Re: Synchronous getUserMedia proposal

On 11/15/2012 09:52 PM, Martin Thomson wrote:
> On 14 November 2012 14:20, Travis Leithead
> <travis.leithead@microsoft.com> wrote:
>> I am surprised that I haven't heard much more pushback on this
>> design approach. I suppose that means it's an inevitable
>> transition.
>
> Don't sound so fatalistic. :)
>
>> A few questions:
>>
>> 1. If the user agent doesn't have any cameras, what happens?
>> (Perhaps a null value is returned? A fake MediaStream in the ENDED
>> state?) Generally speaking, what do we do with all the old error
>> conditions?
>
> My thought was that all error conditions become manifest in the
> "ended" event on the stream.  That includes: no camera, user denies
> permission, etc...

That is also consistent with my thinking so far. On the other hand, we
have talked about having additional events to allow the app to
distinguish e.g. no camera from no permission, but that would in turn
make things worse from a fingerprinting perspective.
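
For illustration, error handling under this model might look something
like the sketch below (assuming the synchronous form returns a stream
immediately; "showFallbackUI" is just a made-up app function):

  var stream = navigator.getUserMedia({ video: true, audio: true });
  stream.onended = function () {
    // All failure modes end up here: no camera, permission denied,
    // device unplugged, etc. The app cannot tell them apart.
    showFallbackUI();
  };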

>
>> 2. How are multiple cameras supported? By multiple calls to
>> the API as before? It seems like this aspect of the old design
>> needs to change.
>
> The old design was a little strange.  Multiple calls to getUserMedia
> would return different cameras if they were called together (prior
> to reaching stable state).  Was there any specific expectation
> around what would happen in subsequent calls otherwise?
>
> It seems like it would be reasonable to offer some amount of control
> over this:
>  - constraint for selecting a specific source
>  - constraint for selecting a new source

I agree with the above. I think we have discussed how to handle the
situation where the app wants more than one audio and one video track,
but we have not come to a conclusion (apart from the "stable state"
solution).
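
To illustrate, such control could look something like this (purely a
sketch; "sourceId" and "newSource" are hypothetical constraint names we
have not agreed on, and "track" is assumed to be a track from an
existing live stream):

  // Ask for an additional camera, different from any already in use.
  var second = navigator.getUserMedia({
    video: { mandatory: { newSource: true } }
  });

  // Re-acquire a specific source identified via a live track.
  var same = navigator.getUserMedia({
    video: { mandatory: { sourceId: track.id } }
  });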

> Source identities might have to be
> obtained through live streams within the current browsing context.
> I'd be concerned if the identifier were persistent, or discoverable
> without user consent.
>
>> An alternative idea is to use getUserMedia as an
>> approval/activation method for track "promises". As such you'd need
>> a way to create appropriate track "placeholders" and getUserMedia
>> would "upgrade" these to actual media-containing tracks. Consider:
>>
>>   var videoPlaceholder = new MediaStreamTrack("video");
>>   var audioPlaceholder = new MediaStreamTrack("audio");
>>   var placeholderMS = new MediaStream([videoPlaceholder, audioPlaceholder]);
>>
>> The above objects are in a "not started" state and are not tied to
>> any source [yet]. Then getUserMedia will try to bind all the
>> placeholder tracks to real media sources and may succeed, fail, or
>> partially succeed, given the number of requested placeholder
>> tracks. Reporting for each binding failure/success will be in the
>> form of events on each respective track placeholder.
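
If I read the proposal correctly, usage would be roughly as follows
(the "bound" and "bindingfailed" event names are my guesses, nothing
that has been specified):

  videoPlaceholder.addEventListener("bound", function () {
    // This placeholder is now backed by a real camera.
  });
  videoPlaceholder.addEventListener("bindingfailed", function () {
    // No source could be attached, e.g. no camera or no permission.
  });
  navigator.getUserMedia(placeholderMS);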

Where would you apply constraints? E.g., the app wants only one video
track, but it wants it from the front-facing camera; where would that
be expressed (at creation of videoPlaceholder or at getUserMedia)?
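
To make the question concrete, the two variants might look like this
(hypothetical syntax in both cases; "facingMode" is just a placeholder
constraint name):

  // Variant A: constraints attached when the placeholder is created.
  var v = new MediaStreamTrack("video",
                               { mandatory: { facingMode: "user" } });
  navigator.getUserMedia(new MediaStream([v]));

  // Variant B: constraints passed to the getUserMedia call that binds
  // the placeholder tracks to real sources.
  var v2 = new MediaStreamTrack("video");
  navigator.getUserMedia(new MediaStream([v2]),
                         { video: { mandatory: { facingMode: "user" } } });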

Received on Friday, 16 November 2012 13:09:08 UTC