
Re: Using tracks instead of streams

From: Justin Uberti <juberti@google.com>
Date: Thu, 14 Nov 2013 12:38:06 -0800
Message-ID: <CAOJ7v-2nm4cENCHerFrU1vr7u7BMUtEhCLbrEk+RH=xVmOFssQ@mail.gmail.com>
To: Martin Thomson <martin.thomson@gmail.com>
Cc: "Cullen Jennings (fluffy)" <fluffy@cisco.com>, Adam Bergkvist <adam.bergkvist@ericsson.com>, "public-webrtc@w3.org" <public-webrtc@w3.org>
Specifically, for a simple audio call, only 2 lines change:

var stream = myGetUserMedia({audio:true});
var pc = new RTCPeerConnection(null, null);
--> pc.addTrack(stream.getAudioTracks()[0]);
    // or could call the pc.addStream polyfill, which does the code above internally
var offer = pc.createOffer(null);
pc.setLocalDescription(offer);
...
pc.setRemoteDescription(answer);
--> callback: onaddtrack(var receiver) { audioTag.src = receiver.track; }
    // or could call onaddstream via polyfill
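The polyfill idea in the sketch above (keeping the stream-based addStream() while the browser only exposes the track-based addTrack()) can be illustrated with plain stand-in objects. This is a hypothetical sketch, not the actual polyfill or real WebRTC objects: addStreamPolyfill, the mock pc, and the mock stream are all invented here for illustration.

```javascript
// Hypothetical sketch: implement a stream-based addStream() on top of
// a track-based addTrack(), as the message above suggests a polyfill would.
function addStreamPolyfill(pc, stream) {
  // addStream is just addTrack applied to every track in the stream.
  stream.getTracks().forEach(function (track) {
    pc.addTrack(track, stream);
  });
}

// Minimal stand-ins (NOT real RTCPeerConnection/MediaStream) to show the call pattern:
var pc = {
  added: [],
  addTrack: function (track, stream) { this.added.push(track); }
};
var stream = {
  tracks: [{ kind: 'audio', id: 'a1' }],
  getTracks: function () { return this.tracks; }
};

addStreamPolyfill(pc, stream);
console.log(pc.added.length); // 1 — the single audio track was added
```

Under this sketch, existing addStream-based code keeps working unchanged, while new code gets per-track control directly.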

On Thu, Nov 14, 2013 at 10:21 AM, Martin Thomson
<martin.thomson@gmail.com> wrote:

> On 14 November 2013 08:06, Cullen Jennings (fluffy) <fluffy@cisco.com>
> wrote:
> > sure but what would it look like to set up a simple audio call at both
> the caller and callee side from JS point of view, just trying to get my
> head around what you are proposing.
>
> Given that existing applications don't need to change, the example
> could be just the same as any of the many simple examples that are
> already out there.
>
> If you wanted extra control (hold, etc...), then I'm sure that you can
> work out how to acquire the objects you need.  It involves a little
> bit of rummaging through collections, but that's just a natural
> consequence of building APIs with god objects.
>
Received on Thursday, 14 November 2013 20:38:53 UTC
