
Re: Constraints structure and Capabilities API

From: Randell Jesup <randell-ietf@jesup.org>
Date: Fri, 24 Feb 2012 09:32:33 -0500
Message-ID: <4F479F81.9010107@jesup.org>
To: public-webrtc@w3.org
On 2/24/2012 7:23 AM, Rich Tibbett wrote:
>
> Media that is going to be sent over a p2p connection and data that is 
> simply intended for local playback (e.g. as the backdrop for an AR app), 
> local recording (e.g. for conference/dating/social network 
> introductions and pre-recorded messages) or local manipulation (e.g. 
> barcode scanning, face recognition) inherently have very different 
> properties.

Agreed (not 100% sure about 'very', but still, agreed).

>
> I think focus on the p2p use case has been to the detriment of 
> consideration of local use cases. In all three of the local cases 
> above that do not require peer-to-peer streaming it would be ideal 
> simply to have the highest quality video and audio that can be 
> provided by the UA returned for local usage.

I'll just note that "highest quality" is a very fluid thing in video.  
Is it highest framerate?  Highest resolution?  How does light level 
affect it?  Noise level (related to light)?  Is consistent framerate 
important?  What happens when a camera has a built-in encoder (and some 
do, now)?  What if the app wants to process the data (image recognition, 
etc.) but, to reduce processing load or low-light noise, wants a lower 
resolution, a lower framerate, or both?

So just be very careful about assuming that 'highest quality' is what 
you want for those local cases, and that it has any sort of fixed 
definition.
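To make the point concrete, here is one way an app could state what it actually needs instead of asking for "highest quality". This is a hypothetical sketch: the `ideal`/`max` constraint syntax shown follows the MediaTrackConstraints shape that later appeared in the Media Capture and Streams spec, and did not exist in this form when this thread was written. The specific numbers are illustrative, not recommendations.

```javascript
// Hypothetical constraints for local barcode scanning: modest resolution
// and a capped framerate, to cut processing load and low-light noise.
const scanningConstraints = {
  audio: false,
  video: {
    width:     { ideal: 640 },
    height:    { ideal: 480 },
    frameRate: { max: 15 },  // consistency matters less than load here
  },
};

// Hypothetical constraints for a local recording, where fidelity is the
// priority rather than processing cost.
const recordingConstraints = {
  audio: true,
  video: {
    width:     { ideal: 1920 },
    height:    { ideal: 1080 },
    frameRate: { ideal: 30 },
  },
};

// In a browser, either object would be handed to
// navigator.mediaDevices.getUserMedia(constraints), letting the UA pick
// the best match rather than a single fixed notion of "highest quality".
```

The point of the sketch is that "highest quality" dissolves into separate, sometimes conflicting axes (resolution, framerate, noise), and only the app knows which axis it cares about.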

-- 
Randell Jesup
randell-ietf@jesup.org
Received on Friday, 24 February 2012 14:34:34 UTC