
Re: First draft available

From: Ben Francis <ben@krellian.com>
Date: Wed, 30 Nov 2011 17:00:25 +0000
Message-ID: <CADKQpGS2e+BXZTKj8Mouhs2f+K4uo4XAS=ypTke+xV3uUkR6hw@mail.gmail.com>
To: public-media-capture@w3.org

On Wed, Nov 30, 2011 at 2:26 PM, Brian LeRoux <b@brian.io> wrote:

> But what if we have multiple cameras? Front and back are common enough now.
>
> navigator.cameras[0].addEventListener("end", function() {
>    console.log('camera has been turned off')
> }, false)
>

Is the thinking here that it may be necessary to stream from multiple
cameras/microphones simultaneously, rather than allowing the user agent UI
to choose one of multiple sources?

This seems to be the intention of the MediaStream interface (
http://www.w3.org/TR/2011/WD-webrtc-20111027/#mediastream) which allows for
multiple simultaneous video and audio tracks, but I'm not sure how common
this would be as a use case for capture from the local device. I suppose
you might want a stereoscopic camera or stereo microphone, or even some
kind of video-conferencing device that has several input streams in a
conference room or studio...
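
To make that concrete, here is a rough sketch of how a page might inspect
the tracks of a stream it has been handed. This assumes the MediaStream
"tracks" list and the MediaStreamTrack kind/label attributes roughly as
they appear in the draft linked above; the exact names may well change:

    // Hypothetical helper: log each track in a stream obtained from
    // getUserMedia() or a similar source.
    function describeStream(stream) {
      for (var i = 0; i < stream.tracks.length; i++) {
        var track = stream.tracks.item(i);   // MediaStreamTrack
        console.log(track.kind + ': ' + track.label);
      }
    }

A device with front and back cameras could then expose both as video tracks
of a single stream, or as two separate streams; which of those the API
should encourage seems to be the question.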

The current draft says "user agents are encouraged to default to using the
user's primary or system default camera and/or microphone", so do you think
the API should be more flexible than this?
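
For the sake of discussion, one way a page could express a preference
without enumerating devices itself might look something like the sketch
below. The object-style options argument and the "facing" hint are invented
purely for illustration, not taken from the draft:

    // Hypothetical only: ask for a rear-facing camera if one exists, and
    // let the user agent fall back to whatever it considers sensible.
    navigator.getUserMedia({ video: { facing: 'back' }, audio: true },
      function (stream) {
        // success: attach the stream to a <video> element, record it, etc.
      },
      function (error) {
        console.log('getUserMedia failed: ' + error);
      });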

Ben

-- 
Ben Francis
http://tola.me.uk
http://krellian.com
Received on Thursday, 1 December 2011 09:34:04 GMT
