
Re: Constraints structure and Capabilities API

From: Randell Jesup <randell-ietf@jesup.org>
Date: Fri, 24 Feb 2012 12:27:34 -0500
Message-ID: <4F47C886.6060007@jesup.org>
To: public-webrtc@w3.org
On 2/24/2012 11:53 AM, Rich Tibbett wrote:
> Randell Jesup wrote:
>
>>> I think focus on the p2p use case has been at the detriment of
>>> consideration of local use cases. In all three of the local cases
>>> above that do not require peer-to-peer streaming it would be ideal
>>> simply to have the highest quality video and audio that can be
>>> provided by the UA returned for local usage.
>>
>> I'll just note that "highest quality" is a very fluid thing in video. Is
>> it highest framerate? Highest resolution? How does light level affect
>> it? Noise level (related to light)?
>
> It's native framerate and native resolution. Light/noise balance 
> should be auto-configured in the implementation. Developers shouldn't 
> (and I'll suggest won't) go to this level of configuration the 
> majority of the time.

IMHO, there really is no such thing as "native framerate and resolution" 
in modern camera chips.  There's sensor resolution and max frame rate, 
but the camera might only be able to capture (and transfer) sensor 
resolution at low framerate (especially if it happens to be hooked up 
with a lower-rate connection like USB 1.1).  Modern cameras have lots of 
ability to trade off resolution vs frame-rate vs noise.  And it might 
capture 60fps at full resolution in high light, but 5fps (or less) in 
low light, or 60fps in low light with a ton of noise, or 60fps at 1/4 
resolution nicely.  Which is "native"?
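To make the "which is native?" point concrete, here is a minimal sketch in TypeScript. The mode table and the `pickMode` helper are entirely hypothetical (the numbers are illustrative, not from any real camera or UA API); the point is that the "best" mode depends on light level and on whether the caller prioritizes resolution or framerate, so no single mode is "native":

```typescript
// Hypothetical capture modes for one sensor. Nothing here is a real
// camera API; the numbers just illustrate the tradeoffs in the text.
interface CaptureMode {
  width: number;
  height: number;
  fps: number;
  minLux: number; // minimum scene illumination the mode can sustain
}

const modes: CaptureMode[] = [
  { width: 1920, height: 1080, fps: 60, minLux: 400 }, // full res @ 60fps, high light only
  { width: 1920, height: 1080, fps: 5,  minLux: 10 },  // full res in low light, but slow
  { width: 960,  height: 540,  fps: 60, minLux: 10 },  // quarter res keeps 60fps in low light
];

// Pick the "best" usable mode for the current light level, preferring
// resolution or framerate depending on the caller's priority.
function pickMode(
  lux: number,
  prefer: "resolution" | "framerate",
): CaptureMode | undefined {
  const usable = modes.filter((m) => lux >= m.minLux);
  usable.sort((a, b) =>
    prefer === "resolution"
      ? b.width * b.height - a.width * a.height || b.fps - a.fps
      : b.fps - a.fps || b.width * b.height - a.width * a.height,
  );
  return usable[0]; // undefined if no mode works at this light level
}
```

In bright light this returns full resolution at 60fps either way; in low light it returns full resolution at 5fps or quarter resolution at 60fps depending on the stated priority — three different answers to "what is the camera's native mode?".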

>
>> Is consistent framerate important?
>
> We could discuss this further but it may make sense to have some 
> consistency in this regard.

Depends on the application.  For video chat or for video recording, 
*normally* I want rock-solid frame rates.  Other uses (and some chat 
uses) may have different needs, like maximum quality.

>
>> What happens when a camera has a built-in encoder (and some do, now)?
>
> Codecs and encoding have little actual value if it's simply a pipe to 
> a local video element. This becomes significant only for downstream 
> APIs, and the hooks for developers to select characteristics for the 
> encoding should be applied at that level... if at all, since e.g. UDP 
> negotiation gets us a long way towards knowing what we actually need 
> rather than what we want.

Except that you can't increase frame rate "downstream"; it has to be a 
configuration of the camera.  And if a downstream consumer only wants 
320x240, it's silly (as mentioned) not to let the camera downscale for 
you if it can.
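A minimal sketch of that selection logic, with a hypothetical list of supported capture sizes (none of this is a real UA or camera API): pick the smallest camera mode that still covers the downstream request, so any downscaling happens at the camera rather than after shipping full-resolution frames around.

```typescript
// Hypothetical supported capture sizes; illustrative only.
interface Size {
  width: number;
  height: number;
}

const supported: Size[] = [
  { width: 320, height: 240 },
  { width: 640, height: 480 },
  { width: 1280, height: 720 },
  { width: 1920, height: 1080 },
];

// Return the smallest supported mode that covers the downstream request,
// or undefined if the camera can't supply the requested dimensions at all.
function smallestCovering(request: Size, modes: Size[]): Size | undefined {
  return modes
    .filter((m) => m.width >= request.width && m.height >= request.height)
    .sort((a, b) => a.width * a.height - b.width * b.height)[0];
}
```

With this, a downstream request for 320x240 captures at 320x240 directly, and a request for, say, 400x300 captures at the next size up (640x480) rather than at full sensor resolution.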

-- 
Randell Jesup
randell-ietf@jesup.org
Received on Friday, 24 February 2012 17:29:37 UTC
