- From: Rich Tibbett <richt@opera.com>
- Date: Tue, 13 Mar 2012 19:28:28 +0100
- To: public-media-capture@w3.org
Dan Burnett wrote:
> Group,
>
> Here is the proposal for Constraints and Capabilities for the getUserMedia draft.
> Example:
>
> {camera001:{
> video-min-width: 800,
> video-max-width: 1024,
> video-min-height: 600,
> video-max-height: 768,
> video-min-aspectratio: 1.333333333333,
> video-max-aspectratio: 1.333333333333,
> video-min-framerate: 24,
> video-max-framerate: 60,
> video-min-pixelrate: 15,
> video-max-pixelrate: 47,
> video-min-timebetweenkeyframes: 20,
> video-max-timebetweenkeyframes: 40,
> video-min-bandwidth: 1.5,
> video-max-bandwidth: 3.5},
> camera002:{
> video-min-width: 1600,
> video-max-width: 1920,
> video-min-height: 1080,
> video-max-height: 1200,
> video-min-aspectratio: 1.33333333333,
> video-max-aspectratio: 1.77777777777,
> video-min-framerate: 24,
> video-max-framerate: 120,
> video-min-pixelrate: 57.6,
> video-max-pixelrate: 248,
> video-min-timebetweenkeyframes: 20,
> video-max-timebetweenkeyframes: 40,
> video-min-bandwidth: 8,
> video-max-bandwidth: 29.4},
> audio001:{
> audio-min-bandwidth: 1.4,
> audio-max-bandwidth: 128,
> audio-min-mos: 2,
> audio-max-mos: 5,
> audio-min-codinglatency: 10,
> audio-max-codinglatency: 50,
> audio-min-samplingrate: 8000,
> audio-max-samplingrate: 48000}}
>
As mentioned on the conf call, if this proposal is targeted at
getUserMedia, I'd like to see how it maps to some of this group's
original use cases and requirements - especially those documented in [1].
I do think it makes sense to apply characteristics at recording or
streaming time, but I'm unsure why I, as a web developer, would need to
look up these parameters in a registry and then configure a stream
object to this level just to get some local playback going (it's very
IANA, and web APIs typically don't require a manual or a steep learning
curve on the internals of codecs and registries).
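
To make the concern concrete, here's a hedged sketch of local playback
under both models. The lookUpRegistryCapabilities() helper and the
constraint names are assumptions lifted from the proposal above, not
from any spec, and vendor prefixes are ignored:

  var videoElement = document.querySelector('video');
  function onError(err) { console.log('getUserMedia failed: ' + err); }

  // Registry-style flow: first resolve registry parameters (the lookup
  // helper is invented purely for illustration).
  var caps = lookUpRegistryCapabilities('camera001'); // hypothetical
  navigator.getUserMedia({
    'video-min-width': caps['video-min-width'],
    'video-max-width': caps['video-max-width'],
    'video-min-framerate': 24,
    'video-max-framerate': 60
  }, function (stream) {
    videoElement.src = URL.createObjectURL(stream); // 2012-era pattern
  }, onError);

  // What local playback arguably needs instead:
  navigator.getUserMedia({ video: true }, function (stream) {
    videoElement.src = URL.createObjectURL(stream);
  }, onError);
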
My understanding as of now is that a MediaStream is, and should remain,
a raw, codec-less, parameter-less singleton object per input source
that is shared between all callers of getUserMedia. That allows
implementations a great deal of optimization when doling out multiple
MediaStream objects.
A MediaStream object is then likely to get some parameter refinement if
or when it is used in a secondary API - recording or streaming.
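
A minimal sketch of that model, assuming a hypothetical recorder
interface as the secondary API (HypotheticalRecorder and its options
are illustrative names, not anything specified):

  function onError(err) { console.log('getUserMedia failed: ' + err); }

  // Two independent callers each get a handle onto the same raw,
  // codec-less source, so the UA can share the capture pipeline.
  function startRecording(stream) {
    // Parameter refinement happens only where the stream is consumed,
    // here by a hypothetical recorder, not on the stream itself.
    var recorder = new HypotheticalRecorder(stream, {
      width: 800,     // applied at recording time,
      height: 600,    // not baked into the MediaStream
      framerate: 24
    });
    recorder.start();
  }

  navigator.getUserMedia({ video: true }, startRecording, onError);
  navigator.getUserMedia({ video: true }, startRecording, onError);
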
I expect one valid use case for applying constraints at getUserMedia
invocation is to allow hardware-based encoding parameters to be set up
front. I worry about how that scales when multiple calls to
getUserMedia each carry conflicting constraints, rather than a single
stream being distributed at local invocation time.
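
For instance (constraint names copied from the proposal above; what a
UA should do here is exactly the open question):

  function onError(err) { console.log('getUserMedia failed: ' + err); }

  // Caller A binds the hardware encoder to one framerate range...
  navigator.getUserMedia(
    { 'video-min-framerate': 24, 'video-max-framerate': 30 },
    function (streamA) { /* ... */ }, onError);

  // ...while caller B, targeting the same device, asks for an
  // incompatible range.
  navigator.getUserMedia(
    { 'video-min-framerate': 60, 'video-max-framerate': 120 },
    function (streamB) { /* ... */ }, onError);

  // If those parameters are bound to the hardware at invocation time,
  // the UA must fail one call, transcode, or stop sharing the source.
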
What we might want instead are intent-driven constraints applied at
getUserMedia invocation - something very much like, or exactly the same
as, [2] - with more specific constraints applied further down the RTC
toolchain, at local recording or peer establishment time, assuming in
the latter case that meaningful codec/quality parameters cannot be
established as part of SDP negotiation between peers.
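
Something along these lines, where 'intent' is a placeholder in the
spirit of [2] rather than proposed syntax, and the peer connection
names only loosely follow the current RTC drafts:

  function onError(err) { console.log('getUserMedia failed: ' + err); }

  // Acquisition expresses intent: what the stream is for, not how it
  // should be encoded.
  navigator.getUserMedia(
    { video: true, audio: true, intent: 'conversation' },
    function (stream) {
      // Specific codec/quality parameters are applied further down the
      // chain, e.g. at peer establishment, where SDP negotiation (or,
      // failing that, explicit parameters) settles the real values.
      var pc = new HypotheticalPeerConnection(); // illustrative name
      pc.addStream(stream); // addStream() follows the current drafts
    },
    onError);
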
We may absolutely need constraints and capabilities on getUserMedia,
but this thinking seems to have moved a long way from the established
use cases and requirements.
- Rich
[1] http://www.w3.org/TR/capture-scenarios/
[2] http://lists.w3.org/Archives/Public/public-webrtc/2012Jan/0047.html
Received on Tuesday, 13 March 2012 18:28:59 UTC