
RE: New API surface - inbound outbound streams/tracks

From: Sunyang (Eric) <eric.sun@huawei.com>
Date: Thu, 15 Nov 2012 08:25:33 +0000
To: Jim Barnett <Jim.Barnett@genesyslab.com>, Tommy Widenflycht (ᛏᚮᛘᛘᚤ) <tommyw@google.com>, Adam Bergkvist <adam.bergkvist@ericsson.com>
CC: "public-webrtc@w3.org" <public-webrtc@w3.org>, Travis Leithead <travis.leithead@microsoft.com>
Message-ID: <9254B5E6361B1648AFC00BA447E6E8C32AED8066@szxeml545-mbx.china.huawei.com>
According to the latest WebRTC draft:

interface AudioMediaStreamTrack : MediaStreamTrack {
    readonly attribute boolean canInsertDTMF;
    void insertDTMF (DOMString tones, optional long duration);
};

So, if we call insertDTMF, we are using audioTrack.insertDTMF("xxxx"); why can't we know which track the DTMF is sent on?

Yang
Huawei

From: Jim Barnett [mailto:Jim.Barnett@genesyslab.com]
Sent: Monday, November 12, 2012 10:43 PM
To: Tommy Widenflycht (ᛏᚮᛘᛘᚤ); Adam Bergkvist
Cc: public-webrtc@w3.org; Travis Leithead
Subject: RE: New API surface - inbound outbound streams/tracks

If DTMF is sent via PeerConnection, rather than on a specific track, can there ever be ambiguity about which Track the DTMF is actually sent on, and could this ambiguity ever cause problems on the receiving end?  I can’t personally think of a case where there would be a problem, but it’s worth a few moments’ consideration.
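To make the potential ambiguity concrete, here is a hypothetical sketch using plain objects as stand-ins for the WebRTC types (the names makeConnectionLevelPC, addAudioTrack, sendDTMF, and insertTones are illustrative assumptions, not proposed API): with a connection-level sendDTMF, a stream carrying two audio tracks forces the implementation to pick a track implicitly, while a track-level API names the target explicitly.

```javascript
// Stand-in for a connection-level DTMF API (hypothetical).
// With two audio tracks added, sendDTMF must pick one implicitly;
// here we model the common resolution of "first audio track".
function makeConnectionLevelPC() {
  var audioTracks = [];
  return {
    addAudioTrack: function (t) { audioTracks.push(t); },
    sendDTMF: function (tones) {
      // The caller has no say in which track carries the tones.
      return { tones: tones, track: audioTracks[0] };
    }
  };
}

// Stand-in for a track-level DTMF API (hypothetical).
function makeTrackLevelTrack(label) {
  return {
    label: label,
    insertTones: function (tones) {
      // The target track is explicit: it is the receiver itself.
      return { tones: tones, track: this };
    }
  };
}

var pc = makeConnectionLevelPC();
var mic = makeTrackLevelTrack("mic");
var lineIn = makeTrackLevelTrack("line-in");
pc.addAudioTrack(mic);
pc.addAudioTrack(lineIn);

// Connection-level: the tones silently go out on "mic".
var sent = pc.sendDTMF("123");

// Track-level: the caller names the track, so there is no ambiguity.
var explicit = lineIn.insertTones("456");
```

Whether this implicit selection could ever confuse a receiving endpoint is exactly the question raised above.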


- Jim

From: Tommy Widenflycht (ᛏᚮᛘᛘᚤ) [mailto:tommyw@google.com]
Sent: Monday, November 12, 2012 8:08 AM
To: Adam Bergkvist
Cc: public-webrtc@w3.org; Travis Leithead
Subject: Re: New API surface - inbound outbound streams/tracks

You are proposing to introduce an extremely complicated inheritance structure instead of adding 3 or 4 methods on RTCPeerConnection. How is that better? And, even more important, how is this to be explained to our users?

pc.addStream(stream);
var outboundStream = pc.localStreams.getStreamById(stream.id);
var outboundAudio = outboundStream.audioTracks[0]; // pending syntax
if (outboundAudio.canSendDTMF)
    outboundAudio.insertTones("123", ...);

Having the same MediaStream magically change type feels just weird. Compare to:

pc.addStream(stream);
if (pc.canSendDTMF(...))
  pc.sendDTMF(...);

I agree that the second example won't win any prizes for "Best designed API of the year", but it is infinitely simpler and to the point.
On Fri, Nov 9, 2012 at 1:14 PM, Adam Bergkvist <adam.bergkvist@ericsson.com<mailto:adam.bergkvist@ericsson.com>> wrote:
Hi

A while back I sent out a proposal [1] on API additions to represent streams that are sent and received via a PeerConnection. The main goal was to have a natural API surface for the new functionality we're defining (e.g. Stats and DTMF). I didn't get any feedback on the list, but I did get some offline.

I've updated the proposal to match v4 of Travis' settings proposal [2] and would like to run it via the list again.

Summary of the main design goals:
- Have a way to represent a stream instance (with tracks) that is sent (or received) over a specific PeerConnection. Specifically, if the same stream is sent via several PeerConnection objects, the sent stream is represented by different "outbound streams" to provide fine-grained control over the individual transmissions.

- Avoid cluttering PeerConnection with a lot of new API that really belongs on the stream (and track) level but isn't applicable for the local-only case. The representations of sent and received streams and tracks (inbound and outbound) provide the more precise API surface that we need for several of the APIs we're specifying right now, as well as future APIs of the same kind.

Here is the object structure (new objects are marked with *new*). Find examples below.

AbstractMediaStream *new*
|
+- MediaStream
|   * WritableMediaStreamTrackList (audioTracks)
|   * WritableMediaStreamTrackList (videoTracks)
|
+- PeerConnectionMediaStream *new*
    // represents inbound and outbound streams (we could use
    // separate types if more flexibility is required)
    * MediaStreamTrackList (audioTracks)
    * MediaStreamTrackList (videoTracks)

MediaStreamTrack
|
+- VideoStreamTrack
|  |
|  +- VideoDeviceTrack
|  |   * PictureDevice
|  |
|  +- InboundVideoTrack *new*
|  |   // inbound video stats
|  |
|  +- OutboundVideoTrack *new*
|      // control outgoing bandwidth, priority, ...
|      // outbound video stats
|      // enable/disable outgoing (?)
|
+- AudioStreamTrack
   |
   +- AudioDeviceTrack
   |
   +- InboundAudioStreamTrack *new*
   |   // receive DTMF (?)
   |   // inbound audio stats
   |
   +- OutboundAudioStreamTrack *new*
       // send DTMF
       // control outgoing bandwidth, priority, ...
       // outbound audio stats
       // enable/disable outgoing (?)

=== Examples ===

// 1. ***** Send DTMF *****

pc.addStream(stream);
// ...

var outboundStream = pc.localStreams.getStreamById(stream.id);
var outboundAudio = outboundStream.audioTracks[0]; // pending syntax

if (outboundAudio.canSendDTMF)
    outboundAudio.insertTones("123", ...);


// 2. ***** Control outgoing media with constraints *****

// the way of setting constraints in this example is based on Travis'
// proposal (v4) combined with some points from Randell's bug 18561 [3]

var speakerStream; // speaker audio and video
var slidesStream; // video of slides

pc.addStream(speakerStream);
pc.addStream(slidesStream);
// ...

var outboundSpeakerStream = pc.localStreams
        .getStreamById(speakerStream.id);
var speakerAudio = outboundSpeakerStream.audioTracks[0];
var speakerVideo = outboundSpeakerStream.videoTracks[0];

speakerAudio.priority.request("very-high");
speakerAudio.bitrate.request({ "min": 30, "max": 120,
                               "thresholdToNotify": 10 });
speakerAudio.bitrate.onchange = speakerAudioBitrateChanged;
speakerAudio.onconstraintserror = failureToComply;

speakerVideo.priority.request("medium");
speakerVideo.bitrate.request({ "min": 500, "max": 1000,
                               "thresholdToNotify": 100 });
speakerVideo.bitrate.onchange = speakerVideoBitrateChanged;
speakerVideo.onconstraintserror = failureToComply;

var outboundSlidesStream = pc.localStreams
        .getStreamById(slidesStream.id);
var slidesVideo = outboundSlidesStream.videoTracks[0];

slidesVideo.priority.request("high");
slidesVideo.bitrate.request({ "min": 600, "max": 800,
                              "thresholdToNotify": 50 });
slidesVideo.bitrate.onchange = slidesVideoBitrateChanged;
slidesVideo.onconstraintserror = failureToComply;


// 3. ***** Enable/disable on outbound tracks *****

// send same stream to two different peers
pcA.addStream(stream);
pcB.addStream(stream);
// ...

// retrieve the *different* outbound streams
var streamToA = pcA.localStreams.getStreamById(stream.id);
var streamToB = pcB.localStreams.getStreamById(stream.id);

// disable video to A and disable audio to B
streamToA.videoTracks[0].enabled = false;
streamToB.audioTracks[0].enabled = false;

======

Please comment and don't hesitate to ask if things are unclear.

/Adam

----
[1] http://lists.w3.org/Archives/Public/public-webrtc/2012Sep/0025.html

[2] http://dvcs.w3.org/hg/dap/raw-file/tip/media-stream-capture/proposals/SettingsAPI_proposal_v4.html

[3] https://www.w3.org/Bugs/Public/show_bug.cgi?id=15861


Received on Thursday, 15 November 2012 08:27:06 GMT
