RE: Mozilla/Cisco API Proposal

>-----Original Message-----
>From: Ian Hickson [mailto:ian@hixie.ch] 
>Sent: 18 July 2011 20:46
>To: Stefan Håkansson LK
>Cc: public-webrtc@w3.org; public-audio@w3.org
>Subject: Re: Mozilla/Cisco API Proposal
>
>On Mon, 18 Jul 2011, Stefan Håkansson LK wrote:
>>
>> In 
>> <http://datatracker.ietf.org/doc/draft-ietf-rtcweb-use-cases-and-requirements/> 
>> there is a use case that uses two cameras: Hockey game viewer. There, 
>> one camera is used to show the game, the other a view of a person 
>> attending the match (and participating in an on-line discussion about 
>> the game).
>> 
>> I guess that if the web app gets access to both cameras switching 
>> views would be simple to implement. You could be lazy and transmit 
>> both videos all the time but select which one is displayed in the 
>> video element, or you could enable/disable tracks or add/remove 
>> streams at the sending side if you want to save transmission bandwidth.
>
>The current API already supports this -- you'd just get the 
>two cameras using getUserMedia(), add both streams, and then 
>mute the one you don't want to send.
I agree. This was kind of the point I was trying to make.
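For concreteness, a rough sketch of that flow. Note this uses the API
shapes as they have since been standardized (the promise-based
navigator.mediaDevices.getUserMedia and RTCPeerConnection.addTrack), not
the draft API under discussion here, so treat it as illustrative only:

    // Open two cameras (assumes at least two 'videoinput' devices).
    const devices = await navigator.mediaDevices.enumerateDevices();
    const cams = devices.filter(d => d.kind === 'videoinput');

    const gameStream = await navigator.mediaDevices.getUserMedia(
      { video: { deviceId: { exact: cams[0].deviceId } }, audio: true });
    const faceStream = await navigator.mediaDevices.getUserMedia(
      { video: { deviceId: { exact: cams[1].deviceId } } });

    // Send the tracks of both streams over one connection.
    const pc = new RTCPeerConnection();
    gameStream.getTracks().forEach(t => pc.addTrack(t, gameStream));
    faceStream.getTracks().forEach(t => pc.addTrack(t, faceStream));

    // "Mute" the view we don't want: a disabled video track keeps its
    // sender but produces black frames instead of camera video.
    faceStream.getVideoTracks()[0].enabled = false;

    // Switching views is then just flipping the enabled flags.
    function showFaceCam(on) {
      gameStream.getVideoTracks()[0].enabled = !on;
      faceStream.getVideoTracks()[0].enabled = on;
    }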
>
>We may want to consider a way to merge MediaStream objects 
>into one (currently there's only the ability to fork), so that 
>you could just send one stream and dynamically pick one or the 
>other. But it seems if we do that we'd probably also want to 
>be able to do fades and other transitions for the video, and 
>of course we'd probably want this to hook right into the audio 
>API. It's unfortunate that this area is split across multiple 
>working groups at the W3C.
I agree that we have a challenge coordinating things between the WGs.
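
On the "merge the streams and dynamically pick one" idea: the API as it
has since been standardized approximates this without merging, by
keeping a single video sender and swapping the track that feeds it
(RTCRtpSender.replaceTrack; nothing like it existed in the drafts at the
time). A variant of the sketch above, adding only one video track:

    // One outgoing video sender; only one camera is ever transmitted.
    const pc = new RTCPeerConnection();
    const sender = pc.addTrack(gameStream.getVideoTracks()[0], gameStream);

    // Swap which camera feeds the sender; no renegotiation is needed.
    async function switchToFaceCam() {
      await sender.replaceTrack(faceStream.getVideoTracks()[0]);
    }

Fades and other transitions would still need something more, e.g.
drawing both cameras onto a canvas and sending canvas.captureStream().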

A couple of other requirements (the ones about being able to spatialize and mix audio streams and sound objects - F13 & F15) are really in the Audio WG's space.
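
A minimal sketch of what F13/F15 could look like on top of today's Web
Audio API (remoteStreamA and remoteStreamB are placeholders for audio
streams received over the peer connection):

    const ctx = new AudioContext();
    const mixBus = ctx.createGain();          // master mix level (F15)
    mixBus.connect(ctx.destination);

    // Spatialize one remote voice and give it its own level (F13).
    function addVoice(stream, x) {
      const src = ctx.createMediaStreamSource(stream);
      const panner = ctx.createPanner();
      panner.positionX.value = x;             // left/right of the listener
      const gain = ctx.createGain();
      src.connect(panner);
      panner.connect(gain);
      gain.connect(mixBus);
      return gain;
    }

    const voiceA = addVoice(remoteStreamA, -1);
    const voiceB = addVoice(remoteStreamB, +1);
    voiceB.gain.value = 0.5;                  // duck one participant

Whether a MediaStream can be routed through a graph like this and back
out to a PeerConnection (createMediaStreamDestination) is exactly the
kind of seam where the two groups' APIs have to fit together.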

It would be a real shame if the APIs end up incompatible.

Stefan
