
Re: Update of Doohickey/AddTrack proposal

From: Adam Bergkvist <adam.bergkvist@ericsson.com>
Date: Thu, 19 Dec 2013 08:07:13 +0100
Message-ID: <52B29B21.8010305@ericsson.com>
To: Justin Uberti <juberti@google.com>, "public-webrtc@w3.org" <public-webrtc@w3.org>

Great. It's time to advance with this proposal.

On 2013-12-19 04:35, Justin Uberti wrote:
> Based on some feedback I've received, I've slightly updated my proposal
> (originally inspired by Martin) on doohickeys and replacing AddStream
> with AddTrack. See below:
>
> ------------------------------------------------------------------------
>
> I suggest we call the SendDoohickeys RtpSenders and the corresponding
> receive-side objects RtpReceivers. These objects allow control of how a
> MediaStreamTrack is encoded and sent on the wire, including "hold"
> state, prioritization, and multiple encodings (e.g. simulcast).

Do we need to have RTP in the names? I know that these names won't be 
used much in code, but we need to talk about them :) and, when we add 
the RTC prefix, the names will start with two acronyms (which isn't very 
nice IMO). Perhaps we could call them RTCMediaSender/RTCMediaReceiver? 
The network behavior is already implied by "sender"/"receiver", and 
"media" distinguishes them from the data channel.

> You get a RtpSender as a return value from AddTrack (which replaces
> AddStream). You get a RtpReceiver as an argument to the new onaddtrack
> callback (which replaces onaddstream). The actual track object can be
> obtained as a property from the RtpReceiver (see below).
>
> For getting access to ICE/DTLS info, both RtpSenders and RtpReceivers
> can also have a reference to a DtlsTransport object, which will have its
> own state variables, including the RTCIceConnectionState of that
> particular transport, and the .remoteCertificates for the DTLS
> connection. This allows applications to monitor the state of individual
> transports, as well as inspect the certificates for individual transports.
>
> The actual interface is as follows:
>
> interface DtlsTransport {
>    attribute RTCIceConnectionState state;
>    attribute sequence<ArrayBuffer> remoteCertificates;
>    //... more stuff as needed
> };
>
> interface RtpSender {
>    readonly attribute MediaStreamTrack track;
>    readonly attribute DtlsTransport transport;
>    // various tweakable attributes
>    attribute boolean active;  // controls "am I sending RTP"
>    attribute PriorityLevel priority;  // for relative bandwidth allocation
>    // for multiple encodings: simulcast (audio or video), layered coding
>    // specify the codec to use, resolution, bitrate, and any
>    // dependencies for each encoding
>    attribute sequence<EncodingParams> encodings;
> };
>
> interface RtpReceiver {
>    readonly attribute DtlsTransport transport;
>    readonly attribute MediaStreamTrack track;
>    // more stuff as needed
> };
>
> partial interface PeerConnection {
>    RtpSender addTrack(MST);  // replaces addStream
>    void removeTrack(RtpSender);  // replaces removeStream
>    readonly attribute sequence<RtpSender> senders;
>    readonly attribute sequence<RtpReceiver> receivers;

Nit: these need to be getter methods.

>    EventHandler onaddtrack;  // replaces onaddstream; onremovestream is not needed

I'm a bit confused by the comment above. Since we don't use streams 
here, onremovestream goes away naturally, but did you mean that we don't 
need onremovetrack?

> };
>
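To make the DTLS/ICE monitoring concrete, here's a quick sketch of how 
an application could read per-transport info through the proposed 
sender.transport reference. transportStates() itself is a made-up helper 
name, not part of the proposal:

```javascript
// Collect the state and certificate count of each sender's transport,
// using the RtpSender/DtlsTransport interfaces proposed above.
function transportStates(pc) {
    return pc.senders.map(function (sender) {
        return {
            state: sender.transport.state,
            certificateCount: sender.transport.remoteCertificates.length
        };
    });
}
```

An app could poll this (or react to future transport events) to surface 
per-transport connectivity in its UI.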
> For backcompat, addStream, removeStream, getLocalStreams,
> getRemoteStreams, and onaddstream can be trivially polyfilled in JS, so
> there is minimal impact on current applications.
>
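For the record, the "trivially polyfilled" part could look roughly like 
this (polyfillStreams() is a hypothetical helper; only addStream, 
removeStream and getLocalStreams are sketched, on top of the proposed 
addTrack/removeTrack):

```javascript
// Restore (part of) the old stream-based API on top of the proposed
// track-based one. One { stream, senders } entry is kept per added stream.
function polyfillStreams(pc) {
    var entries = [];

    pc.addStream = function (stream) {
        var tracks = stream.getAudioTracks().concat(stream.getVideoTracks());
        var senders = tracks.map(function (track) {
            return pc.addTrack(track);
        });
        entries.push({ stream: stream, senders: senders });
    };

    pc.removeStream = function (stream) {
        entries = entries.filter(function (entry) {
            if (entry.stream !== stream)
                return true;
            entry.senders.forEach(function (sender) {
                pc.removeTrack(sender);
            });
            return false;
        });
    };

    pc.getLocalStreams = function () {
        return entries.map(function (entry) {
            return entry.stream;
        });
    };
}
```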
> All together, the pipeline looks like this:
>
> Source ---> MST ---> RtpSender ---> Transport ---> (The Internet) --->
> Transport ---> RtpReceiver ---> MST ---> <video/>
>
> Each RtpSender/Receiver references a single MST, although a single
> RtpSender/Receiver can send/receive multiple encodings (e.g. simulcast).
> There are N RtpSenders/Receivers per Transport; N is controlled by the
> policy specified for BUNDLE.
>
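The N-senders-per-Transport relationship above could be observed from 
script like this (sendersByTransport() is a made-up helper; under BUNDLE 
several senders would end up in one group):

```javascript
// Group the proposed pc.senders sequence by shared transport object.
function sendersByTransport(pc) {
    var groups = [];
    pc.senders.forEach(function (sender) {
        var group = null;
        for (var i = 0; i < groups.length; i++) {
            if (groups[i].transport === sender.transport)
                group = groups[i];
        }
        if (!group) {
            group = { transport: sender.transport, senders: [] };
            groups.push(group);
        }
        group.senders.push(sender);
    });
    return groups;
}
```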

Diff of needed changes to our simple RTCPeerConnection example:

--- a/webrtc_simple_example_old.js
+++ b/webrtc_simple_example.js
@@ -6,6 +6,9 @@ var pc;
  function start() {
      pc = new RTCPeerConnection(configuration);

+    // used to render incoming tracks
+    var receiveStream = new MediaStream();
+
      // send any ice candidates to the other peer
      pc.onicecandidate = function (evt) {
          if (evt.candidate)
@@ -17,15 +20,20 @@ function start() {
          pc.createOffer(localDescCreated, logError);
      }

-    // once remote stream arrives, show it in the remote video element
-    pc.onaddstream = function (evt) {
-        remoteView.src = URL.createObjectURL(evt.stream);
+    // once remote media arrives, add it to a stream to be rendered
+    pc.onaddtrack = function (evt) {
+        receiveStream.addTrack(evt.receiver.track);
+
+        if (!remoteView.srcObject)
+            remoteView.srcObject = receiveStream;
      };

-    // get a local stream, show it in a self-view and add it to be sent
+    // get a local stream, show it in a self-view and add the tracks to be sent
      navigator.getUserMedia({ "audio": true, "video": true }, function (stream) {
          selfView.src = URL.createObjectURL(stream);
-        pc.addStream(stream);
+
+        stream.getAudioTracks().forEach(function (track) { pc.addTrack(track); });
+        stream.getVideoTracks().forEach(function (track) { pc.addTrack(track); });
      }, logError);
  }

/Adam
Received on Thursday, 19 December 2013 07:07:38 UTC