Re: Constraints and Encoding Parameters

On Dec 28, 2015, at 12:43 PM, Peter Thatcher <pthatcher@google.com> wrote:

In my opinion, the real question is: when the framerate of the source track changes or the framerate is degraded, what should happen to different encodings that have different values of maxFramerate/scaleFramerateDownBy? For example, if we have [{maxFramerate: 15}, {maxFramerate: 30}] and the camera drops the source framerate to 12, do we end up with two encodings of 12? Or one with 6 and one with 12?
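
[BA] For concreteness, here is Peter's scenario as a minimal sketch, assuming the sendEncodings shape from the later drafts (the exact API surface was still in flux in this thread):

```typescript
async function startSimulcast(): Promise<void> {
  const pc = new RTCPeerConnection();
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = stream.getVideoTracks();

  // Two encodings capped at 15 and 30 fps. If the camera drops the
  // source to 12 fps, both caps exceed the source rate: do both
  // encodings then emit 12 fps, or one 6 and one 12?
  pc.addTransceiver(track, {
    direction: "sendonly",
    sendEncodings: [
      { rid: "slow", maxFramerate: 15 },
      { rid: "fast", maxFramerate: 30 },
    ],
  });
}
```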

[BA] I am not sure the answer to this question is the same for every simulcast scenario. If the simulcast streams also differ by resolution, then having two encodings at 12 fps could be acceptable. However, if the simulcast streams share the same resolution, then having them converge on the same frame rate as well seems undesirable.
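
A sketch of the two cases, using scaleResolutionDownBy from the current parameter set (an assumption here, purely for illustration):

```typescript
// Case 1: layers differ by resolution. Even if frame-rate adaptation
// drives both to the same 12 fps, the encodings stay usefully distinct.
const byResolution: RTCRtpEncodingParameters[] = [
  { rid: "half", scaleResolutionDownBy: 2, maxFramerate: 15 },
  { rid: "full", scaleResolutionDownBy: 1, maxFramerate: 30 },
];

// Case 2: layers differ only by frame rate. If the source drops to
// 12 fps and both encodings track it, the two layers degenerate into
// redundant copies of one another.
const byFramerateOnly: RTCRtpEncodingParameters[] = [
  { rid: "slow", maxFramerate: 15 },
  { rid: "fast", maxFramerate: 30 },
];
```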

That said, I would question the value of pure frame-rate simulcast. IMHO, it is only useful for codecs without temporal scalability (e.g., H.264/AVC), and if it is only useful for a single codec, is it worth optimizing for?

To my mind, something like maxTemporal might be a better alternative to scaleFramerateDownBy for encoding-level control of temporal scalability, since with temporal scalability only geometric (power-of-two) values of scaleFramerateDownBy are achievable.
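
To spell out the "geometric values" point (a sketch; maxTemporal here is the hypothetical attribute suggested above, not anything in the specs): in a dyadic temporal hierarchy, each additional layer doubles the frame rate, so the only frame-rate reductions an encoder can realize by dropping temporal layers are powers of two.

```typescript
// Frame rate obtained by decoding temporal layers 0..maxTemporal of a
// dyadic hierarchy with `totalLayers` layers. With a 30 fps source and
// three layers: layer 0 -> 7.5 fps, layer 1 -> 15 fps, layer 2 -> 30 fps.
function framerateAtLayer(
  sourceFps: number,
  totalLayers: number,
  maxTemporal: number,
): number {
  // The implied scaleFramerateDownBy is 2 ** (totalLayers - 1 - maxTemporal),
  // always a power of two; a factor like 3 simply cannot be realized by
  // dropping layers.
  return sourceFps / 2 ** (totalLayers - 1 - maxTemporal);
}
```

A maxTemporal cap thus names exactly the operating points the encoder can honor, which is the sense in which it fits temporal scalability better than an arbitrary scale factor.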

That is my logic for thinking we can live without scaleFramerateDownBy in both WebRTC and ORTC.
