A proposal for how we would use the SDP that comes out of the MMUSIC interim

Heading into the MMUSIC interim, I think it would be wise to have some idea
of what SDP we would want to use (what subset of what MMUSIC decides on
would WebRTC support?).  Having spoken to several people in the MMUSIC WG
working on the draft for simulcast and several people in the WebRTC WG,
I've come up with the following proposal for what subset we could use, and
what the API around it would look like.  I hope this proposal and ensuing
discussion help us prepare for the interim.


At the f2f, we had several ideas with different qualities, which I describe
here in JSON form.

Plan A: {api: little, sdp: big}
Plan B: {api: big,    sdp: none}
Plan C: {api: none,   sdp: none}

This proposal (let it be Plan X) tries to combine the best of Plans A and B,
with the qualities {api: moderate, sdp: moderate}.


Here is a graphical representation of the qualities:
          +
  API Big |  B
          |
          |
          |
          |
          |
          |
          |
          |
          |                  X
          |
          |
          |
          |
          |
          |
API Small |                                    A
          |  C
          +------------------------------------+
        SDP Small                        SDP Big



Here's how it would work with an example:

var video = pc.addTransceiver(track,
  {send: {encodings: [{scale: 1.0}, {scale: 2.0}, {scale: 4.0}]}});

// This now has a *subset* of the SDP from MMUSIC
pc.createOffer().then(function(offer) {
  pc.setLocalDescription(offer);
  signalOffer(offer);
});
var answer = ... ;  // wait for the answer

// This accepts a *subset* of the SDP from MMUSIC
pc.setRemoteDescription(answer);

// The app can later decide to change parameters, such as
// stop sending the top layer
var params = video.sender.getParameters();
params.encodings[0].active = false;
video.sender.setParameters(params);


The key parts are:
1.  It builds on top of addTransceiver, expanding {send: true} to {send:
...} where the ... can express the desire for simulcast.
2.  It uses a subset of the SDP from MMUSIC in the offer and answer.
3.  It gives JS some control over each layer: .active, .maxBandwidth,
.scale (see the sketch just after this list).
4.  The additions to the API and to the SDP are simple.
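
To illustrate point 3, here is a minimal sketch of adjusting individual
layers after the initial negotiation.  It assumes .maxBandwidth is expressed
in bits per second and that .scale stays writable via setParameters; neither
of those details is settled yet.

var params = video.sender.getParameters();
params.encodings[1].maxBandwidth = 500000;  // cap the middle layer at ~500 kbps
params.encodings[2].scale = 8.0;            // shrink the smallest layer further
video.sender.setParameters(params);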



Here's the WebIDL:

dictionary RTCRtpTransceiverInit {
  (boolean or RTCRtpParameters) send;
  // ... the rest as-is
};

dictionary RTCRtpEncodingParameters {
  double scale;  // Resolution scale
  unsigned long rsid;  // RTP Source Stream ID
  // ... the rest as-is
};
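
As a hypothetical illustration (assuming the browser fills in .rsid to match
the a=rsid values it offers, which is not yet specified), the app could
inspect the resulting layers like this:

video.sender.getParameters().encodings.forEach(function(encoding) {
  console.log('rsid=' + encoding.rsid + ', scale=' + encoding.scale);
});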

And here's the *subset* of the SDP from MMUSIC we could use in the offer
(obviously subject to change based on the results of the interim):

m=video ...
...
a=rsid send 1
a=rsid send 2
a=rsid send 3
a=simulcast rsids=1,2,3
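
A minimal sketch, assuming these exact attribute names (which the interim may
well change), of how an app could sanity-check the generated offer before
signaling it:

pc.createOffer().then(function(offer) {
  var sendLayers = (offer.sdp.match(/a=rsid send \d+/g) || []).length;
  var hasSimulcast = offer.sdp.indexOf('a=simulcast') !== -1;
  console.assert(sendLayers === 3 && hasSimulcast,
                 'expected one a=rsid send line per configured encoding');
  signalOffer(offer);
});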


And here's the *subset* of the SDP from MMUSIC we could use in the answer:
m=video ...
...
a=rsid recv 1
a=rsid recv 2
a=rsid recv 3
a=simulcast rsids=1,2,3
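
Under the same assumptions, a sketch of checking the incoming answer before
applying it, and falling back to a single encoding if the far side did not
accept simulcast:

if (answer.sdp.indexOf('a=simulcast') !== -1) {
  pc.setRemoteDescription(answer);
} else {
  // No simulcast in the answer: deactivate the extra layers and send one.
  var params = video.sender.getParameters();
  params.encodings.slice(1).forEach(function(e) { e.active = false; });
  video.sender.setParameters(params);
  pc.setRemoteDescription(answer);
}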


That's it.  That's all I think we need: a simple addition to addTransceiver
plus a simple subset of the SDP from MMUSIC.


The last thing I would note is that I propose we *do not* use the entirety
of the MMUSIC draft in WebRTC.  In particular, not the PT overloading or
the more extensive attributes that don't map well to
RTCRtpEncodingParameters (max-width, max-height, max-fps, max-fs, max-pps).
