Re: maxHeight and maxWidth

Ugh, sent from the wrong address...

On 2/16/2016 9:35 PM, Peter Thatcher wrote:
> While thinking about this some more, I thought of another difficulty: if 
> resolution is degraded, should that degradation happen before or after 
> the max?
>
> In your use case with two layers (1 full, 1 max of 90 pixels tall), 
> what happens when the RtpSender degrades the resolution?  Let's say it 
> degrades a video frame with an original height of 360 pixels down to a 
> height of 180 pixels.  Clearly, the full layer is now 180 pixels 
> high.  But what should happen to the max-height-90-pixel layer?  
> Should it become 45 pixels tall or stay 90 pixels?

90.
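
In other words (a quick sketch only, assuming the per-encoding maxHeight 
attribute being discussed in this thread - names are from the proposal, not 
shipping API): the cap applies to the already-degraded frame, so the capped 
layer is simply clamped:

// Hypothetical per-encoding cap from this thread, not a shipping API.
interface EncodingCap {
  maxHeight?: number;  // absolute cap in pixels, applied after degradation
}

// Height actually sent for one layer, given the degraded source frame.
function layerHeight(degradedSourceHeight: number, cap: EncodingCap): number {
  return cap.maxHeight !== undefined
    ? Math.min(degradedSourceHeight, cap.maxHeight)
    : degradedSourceHeight;
}

layerHeight(180, {});                // full layer: 180
layerHeight(180, { maxHeight: 90 }); // capped layer: stays 90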

> If it should stay 90, then what happens if the video further degrades 
> down to 1/4 of the original size, such that the full layer is also 90 
> pixels high?  Would they both be 90?

Yes - but the engine running simulcast should (not spec language!) be 
smart enough to shut off the "full" layer at that point (when the two 
resolutions get "close enough").  Similarly, if you define a 
max-width/height layer of 640x480 and feed in a camera at full 
resolution, the engine should be smart enough to provide two layers for 
HD input (one 640x480, one full HD), and only a single layer if the 
camera will only provide 640x480.  If bandwidth degrades enough, the 
engine can degrade that layer even further.
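
Roughly the kind of (browser-internal, unspecified) layer selection I 
mean - a sketch with made-up names, assuming encodings that carry the 
proposed max caps:

// Hypothetical types; the real decision lives inside the browser's
// simulcast engine and is not specified anywhere.
interface LayerConfig { maxHeight?: number; }  // undefined = "full" layer

// Drop a layer whose height would land "close enough" to a lower one,
// e.g. a "full" layer no longer meaningfully bigger than a capped one.
function selectLayerHeights(sourceHeight: number, configs: LayerConfig[],
                            closeEnoughRatio = 1.25): number[] {
  const heights = configs
    .map(c => c.maxHeight !== undefined
      ? Math.min(sourceHeight, c.maxHeight)
      : sourceHeight)
    .sort((a, b) => a - b);

  const kept: number[] = [];
  for (const h of heights) {
    const prev = kept[kept.length - 1];
    if (prev === undefined || h / prev >= closeEnoughRatio) kept.push(h);
  }
  return kept;
}

selectLayerHeights(720, [{ maxHeight: 480 }, {}]); // HD camera: [480, 720]
selectLayerHeights(480, [{ maxHeight: 480 }, {}]); // VGA camera: [480]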

Similarly, if you provide a ScaleBy of 2 and the bandwidth degrades 
enough, the top layer will (should) be shut off/starved (though you 
could react by cutting the top-layer resolution, in which case the lower 
layer would get cut too).  If bandwidth degrades further, you don't 
(shouldn't, IMHO) turn the top layer back on at a lower resolution and 
move the lower layer to 1/2 of that; you should just degrade the lower 
layer.

This is part of the logic of the 
bandwidth-allocation-and-resolution-adaptation code that lives in the 
browser.  It's not defined by the spec, but should do something 
"reasonable" and not too unexpected given the inputs it's processing.  
We should *not* try to lock down or even significantly constrain such 
algorithms; just provide them input on what the application would like 
to get out of it.


The other way you could go here would be to allow applications to make 
or overrule all these decisions.  This would require defining a complex 
API with a bunch of inputs (which would be hard to standardize across 
browsers), and also running the algorithm in a worker since you can't 
have it waiting on GC - and even then you might have some 
control-latency issues.  Let's not go there... And let's not go to "lock 
down a precise algorithm for how the browser/simulcast engine must 
react" (see jib and getUserMedia constraints).

-- 
Randell Jesup, Mozilla
