RE: Synchronous getUserMedia proposal

From: Li Li <Li.NJ.Li@huawei.com>
Date: Fri, 16 Nov 2012 17:37:44 +0000
To: Stefan Hakansson LK <stefan.lk.hakansson@ericsson.com>, Martin Thomson <martin.thomson@gmail.com>
CC: Travis Leithead <travis.leithead@microsoft.com>, "public-webrtc@w3.org" <public-webrtc@w3.org>, "public-media-capture@w3.org" <public-media-capture@w3.org>
Message-ID: <B60F8F444AAC9C49A9EF0D12D05E0942216D9539@szxeml535-mbx.china.huawei.com>
I like this approach from Travis, based on a source-pipe/filter-sink model of streams in which each part can be substituted without impacting the others.

In terms of where to put constraints, I think it is better to specify them on the pipe, e.g. in new MediaStreamTrack("video"). That way, the JavaScript code can control the stream track based on which constraints it has satisfied.

It is also OK to specify all constraints in getUserMedia() to avoid potential conflicts between constraints in two different places.
In that case, it is useful to have a read-only attribute on the pipe, e.g. videoPlaceholder.constraints, to retrieve the constraints that it has satisfied.
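To make the idea concrete, here is a rough, non-normative sketch in plain JS of how constraints on the "pipe" plus a read-only constraints attribute could behave. None of this is standard API; MediaStreamTrack and getUserMedia below are toy models of the proposal, and the source-binding logic is invented for illustration:

```javascript
// Toy model of the proposed placeholder-track ("promise") API.
class MediaStreamTrack {
  constructor(kind, constraints = {}) {
    this.kind = kind;               // "audio" or "video"
    this.requested = constraints;   // constraints asked for on the pipe
    this.constraints = null;        // read-only view of satisfied constraints
    this.readyState = "new";        // "new" -> "live" or "ended"
  }
}

class MediaStream {
  constructor(tracks) { this.tracks = tracks; }
}

// Toy stand-in for the proposed synchronous getUserMedia: it tries to
// bind each placeholder to an available source and records which
// constraints were satisfied. In the real proposal, binding success or
// failure would surface as events on each placeholder track.
function getUserMedia(stream, availableSources) {
  for (const track of stream.tracks) {
    const source = availableSources.find(s => s.kind === track.kind);
    if (source) {
      track.readyState = "live";
      track.constraints = { ...track.requested }; // pretend all were met
    } else {
      track.readyState = "ended";                 // e.g. no camera
    }
  }
  return stream;
}

// Usage: ask for a front-facing camera on the video placeholder itself.
const video = new MediaStreamTrack("video", { facingMode: "user" });
const audio = new MediaStreamTrack("audio");
const stream = new MediaStream([video, audio]);

getUserMedia(stream, [{ kind: "video" }]);  // no microphone available
console.log(video.readyState, video.constraints.facingMode); // live user
console.log(audio.readyState);                               // ended
```

The point of the sketch is that each placeholder carries its own requested constraints and, after binding, exposes the satisfied ones, so partial success (video bound, audio ended) is visible per track rather than failing the whole call.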

Thanks.
Li

> -----Original Message-----
> From: Stefan Hakansson LK [mailto:stefan.lk.hakansson@ericsson.com]
> Sent: Friday, November 16, 2012 8:09 AM
> To: Martin Thomson
> Cc: Travis Leithead; Adam Bergkvist; public-webrtc@w3.org; public-
> media-capture@w3.org
> Subject: Re: Synchronous getUserMedia proposal
> 
> On 11/15/2012 09:52 PM, Martin Thomson wrote:
> > On 14 November 2012 14:20, Travis Leithead
> > <travis.leithead@microsoft.com> wrote:
> >> I am surprised that I haven't heard much more pushback on this
> >> design approach. I suppose that means it's an inevitable
> >> transition.
> >
> > Don't sound so fatalistic. :)
> >
> >> A few questions: 1. If the user agent doesn't have any cameras what
> >> happens? (Perhaps a null value is returned? A fake Media Stream in
> >> the ENDED state?) Generally speaking, what do we do with all the
> >> old error conditions?
> >
> > My thought was that all error conditions become manifest in the
> > "ended" event on the stream.  That includes: no camera, user denies
> > permission, etc...
> 
> That is also consistent with my thinking so far (on the other hand, we
> have talked about having additional events to allow the app to
> distinguish e.g. no camera from no permission, but that would on the
> third(sic) hand make things worse from a finger printing perspective).
> 
> >
> >> 2. How are multiple cameras supported? By multiple calls to
> >> the API as before? It seems like this aspect of the old design
> >> needs to change.
> >
> > The old design was a little strange.  Multiple calls to getUserMedia
> > would return different cameras if they were called together (prior
> > to reaching stable state).  Was there any specific expectation
> > around what would happen in subsequent calls otherwise?
> >
> > It seems like it would be reasonable to offer some amount of control
> > over this: - constraint for selecting a specific source - constraint
> > for selecting a new source
> 
> I agree with the above. I think we have discussed, but not come to a
> conclusion on (apart from the "stable state" solution), how to handle
> the situation where the app wants more than one audio and one video
> track.
> 
> > Source identities might have to be
> > obtained through live streams within the current browsing context.
> > I'd be concerned if the identifier were persistent, or discoverable
> > without user consent.
> >
> >> An alternative idea is to use getUserMedia as an
> >> approval/activation method for track "promises". As such you'd need
> >> a way to create appropriate track "placeholders" and getUserMedia
> >> would "upgrade" these to actual media-containing tracks. Consider:
> >> var videoPlaceholder = new MediaStreamTrack("video"); var
> >> audioPlaceholder = new MediaStreamTrack("audio"); var placeholderMS
> >> = new MediaStream([videoPlaceholder, audioPlaceholder]);
> >>
> >> The above objects are in a "not started" state and are not tied to
> >> any source [yet]. Then getUserMedia will try to bind all the
> >> placeholder tracks to real media sources and may succeed, fail, or
> >> partially succeed, given the number of requested placeholder
> >> tracks. Reporting for each binding failure/success will be in the
> >> form of events on each respective track placeholder.
> 
> Where would you apply constraints? E.g., the app wants only one video
> track, but it wants it from the front-facing camera; where would that be
> expressed (is it at creation of videoPlaceholder or at getUserMedia)?
> 
> 

Received on Friday, 16 November 2012 17:38:27 GMT
