
Re: Constraints and Encoding Parameters

From: Bernard Aboba <Bernard.Aboba@microsoft.com>
Date: Tue, 29 Dec 2015 02:10:08 +0000
To: Peter Thatcher <pthatcher@google.com>
CC: "public-ortc@w3.org" <public-ortc@w3.org>, "public-webrtc@w3.org" <public-webrtc@w3.org>
Message-ID: <5ADCEFF8-0C43-4660-A4E6-D3B7A8F8D2CF@microsoft.com>
On Dec 28, 2015, at 12:43 PM, Peter Thatcher <pthatcher@google.com> wrote:

In my opinion, the real question is: when the framerate of the source track changes or the framerate is degraded, what should happen to different encodings that have different values of maxFramerate/scaleFramerateDownBy? For example, if we have [{maxFramerate: 15}, {maxFramerate: 30}] and the camera drops the source framerate to 12, do we end up with two encodings of 12? Or one with 6 and one with 12?
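[The two interpretations in the question above can be sketched as follows. This is an illustrative helper, not part of any spec; the names `clampPolicy` and `preserveRatioPolicy` are hypothetical.]

```javascript
// Hypothetical sketch of the two outcomes discussed above.
// encodings: array of {maxFramerate}; sourceFps: current source framerate.

// Interpretation 1: clamp each encoding independently to the source framerate.
function clampPolicy(encodings, sourceFps) {
  return encodings.map(e => Math.min(e.maxFramerate, sourceFps));
}

// Interpretation 2: preserve the ratios between encodings, scaling every
// encoding down relative to the highest maxFramerate.
function preserveRatioPolicy(encodings, sourceFps) {
  const top = Math.max(...encodings.map(e => e.maxFramerate));
  if (sourceFps >= top) return encodings.map(e => e.maxFramerate);
  return encodings.map(e => (e.maxFramerate * sourceFps) / top);
}

// With [{maxFramerate: 15}, {maxFramerate: 30}] and the camera at 12 fps:
clampPolicy([{maxFramerate: 15}, {maxFramerate: 30}], 12);         // [12, 12]
preserveRatioPolicy([{maxFramerate: 15}, {maxFramerate: 30}], 12); // [6, 12]
```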

[BA] I am not sure the answer to this question is the same for every simulcast scenario. If the simulcast streams also differ by resolution, then having two encodings at 12 fps could be acceptable. However, if the simulcast streams have the same resolution, then having the same frame rate as well seems undesirable.

That said, I would question the value of pure frame-rate simulcast. IMHO, it is only useful for codecs without temporal scalability (e.g., H.264/AVC), and if it is only useful for a single codec, is it worth optimizing for?

To my mind, something like maxTemporal might be a better alternative to scaleFrameRateDownBy for encoding control of temporal scalability, since with temporal scalability only geometric (power-of-two) values of scaleFrameRateDownBy are valid.
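[As an illustration of why only geometric values arise: in a standard dyadic temporal-scalability structure, layer k of T layers runs at sourceFps / 2^(T - 1 - k). The name `maxTemporal` and this helper are assumptions for illustration, not spec fields.]

```javascript
// Hypothetical illustration: framerate obtained when decoding up to a given
// temporal layer of a dyadic temporal-scalability structure. The divisor is
// always a power of two, which is why a layer index (maxTemporal) expresses
// the valid choices more directly than an arbitrary scaleFrameRateDownBy.
function framerateForTemporalLayer(sourceFps, totalLayers, maxTemporal) {
  const layer = Math.min(maxTemporal, totalLayers - 1);
  return sourceFps / Math.pow(2, totalLayers - 1 - layer);
}

// A three-layer structure at 30 fps yields 7.5, 15, and 30 fps:
framerateForTemporalLayer(30, 3, 0); // 7.5
framerateForTemporalLayer(30, 3, 1); // 15
framerateForTemporalLayer(30, 3, 2); // 30
```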

That is my logic for thinking we can live without scaleFrameRateDownBy in both WebRTC and ORTC.
Received on Tuesday, 29 December 2015 02:10:40 UTC
