Re: A proposal for how we would use the SDP that comes out of the MMUSIC interim

Some comments on the proposal; I'm not sure I would advocate adopting it yet.


On 9 Oct 2015 04:06, Peter Thatcher wrote:
> Heading into the MMUSIC interim, I think it would be wise to have some
> idea of what SDP we would want to use (what subset of what MMUSIC
> decides on would WebRTC support?).  Having spoken to several people in
> the MMUSIC WG working on the draft for simulcast and several people in
> the WebRTC WG, I've come up with the following proposal for what subset
> we could use, and what the API around it would look like.  I hope this
> proposal and ensuing discussion help us prepare for the interim.
> 
> 
> At the f2f, we had several ideas with different qualities, which I
> describe here in JSON form.

I'm hesitant to call these "plans"; they're not fleshed out enough. But
I think I can map your points to some of the things I remember...

> 
> Plan A: {api: little, sdp: big}

This would be the "just set the number of layers and let SDP do the
rest" plan?

> Plan B: {api: big,    sdp: none}

This would be the "drop multicast from WebRTC-1.0 API, leave it to
lower-layer APIs in NV" plan?

> Plan C: {api: none,   sdp: none}

This would be the "just say no" plan?

> 
> This proposal (call it Plan X) tries to combine the best of both and
> has the qualities {api: moderate, sdp: moderate}.
> 
> 
> Here is a graphical representation of the qualities:
>                                                     
>               +                                     
>      API Big  | B                                   
>               |                                     
>               |                                     
>               |                                     
>               |                                     
>               |                                     
>               |                                     
>               |                                     
>               |                                     
>               |              X                      
>               |                                     
>               |                                     
>               |                                     
>               |                                     
>               |                                     
>               |                                     
>    API Small  |                              A      
>               | C                                   
>               +------------------------------------+
>                                                     
>                 SDP Small                  SDP Big  
>                                                     
> 
> 
> 
> Here's how it would work with an example:
> 
> var video = pc.addTransceiver(track, 
>   {send: {encodings: [{scale: 1.0}, {scale: 2.0}, {scale: 4.0}]}});
> 
> // This now has a *subset* of the SDP from MMUSIC
> pc.createOffer().then(function (offer) {
>   pc.setLocalDescription(offer);  // apply locally, then signal it
>   signalOffer(offer);
> });
> var answer = ... ; //  wait for the answer
> 
> // This accepts a *subset* of the SDP from MMUSIC
> pc.setRemoteDescription(answer);
> 
> // The app can later decide to change parameters, such as 
> // stop sending the top layer
> var params = video.sender.getParameters();
> params.encodings[0].active = false;
> video.sender.setParameters(params);
> 
> 
> The key parts are:
> 1.  It builds on top of addTransceiver, expanding {send: true} to {send:
> ...} where the ... can express the desire for simulcast.

Actually this is the API part I like the least. While we did this for
constraints, the result is not so pretty. I'd prefer to just add a
dictionary member called "sendLayers". If "send" is modifiable in the
transceiver (as I think it should be), this also allows us to prepare a
transceiver for simulcast without turning on the spigot at first
negotiation.
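
Something like this, perhaps (a sketch only; "sendLayers" is a
hypothetical dictionary member, and I'm assuming "send" ends up
writable on the transceiver):

var video = pc.addTransceiver(track, {
  send: false,  // negotiate the layers without sending yet
  sendLayers: [{scale: 1.0}, {scale: 2.0}, {scale: 4.0}]
});
// ... later, after the first negotiation, turn on the spigot:
video.send = true;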

Syntax seems fine, except that I don't know what it means: is it
strictly spatial resolution differences, or is it a bandwidth target
that can be hit by any combination of spatial and temporal shenanigans?
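
To make the two readings concrete (a sketch; the member names are the
ones from your point 3 below, the values made up):

// Reading 1: strictly spatial -- scale divides the capture resolution,
// so a 1280x720 track would send 1280x720, 640x360 and 320x180.
{send: {encodings: [{scale: 1.0}, {scale: 2.0}, {scale: 4.0}]}}

// Reading 2: a bandwidth target -- the encoder hits the rate by any
// mix of spatial and temporal scaling:
{send: {encodings: [{maxBandwidth: 1500000},
                    {maxBandwidth: 500000},
                    {maxBandwidth: 150000}]}}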


> 2.  It uses a subset of the SDP from MMUSIC in the offer and answer.
> 3.  It gives JS some control over each layer: .active, .maxBandwidth,
> .scale.
> 4.  The additions to the API and to the SDP are simple.
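
Point 3 in practice would presumably look something like this (a
sketch; whether maxBandwidth is in bits per second is my assumption):

var params = video.sender.getParameters();
params.encodings[1].maxBandwidth = 500000;  // cap the middle layer
params.encodings[2].scale = 8.0;            // shrink the smallest layer further
video.sender.setParameters(params);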
> 
> 
> 
> Here's the WebIDL:
> 
> dictionary RTCRtpTransceiverInit {
>   (boolean or RTCRtpParameters) send;
>   // .. the rest as-is
> };
> 
> dictionary RTCRtpEncodingParameters {
>   double scale;  // Resolution scale
>   unsigned long rsid;  // RTP Source Stream ID
>   // ... the rest as-is
> };
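
Connecting the WebIDL to the SDP below (my reading; the rsid values
are chosen to match the example):

var video = pc.addTransceiver(track, {send: {encodings: [
  {rsid: 1, scale: 1.0},
  {rsid: 2, scale: 2.0},
  {rsid: 3, scale: 4.0}
]}});
// Presumably this is what produces the "a=rsid send 1/2/3" and
// "a=simulcast rsids=1,2,3" lines in the offer.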
> 
> And here's the *subset* of the SDP from MMUSIC we could use in the offer
> (obviously subject to change based on the results of the interim):
> 
> m=video ...
> ...
> a=rsid send 1
> a=rsid send 2
> a=rsid send 3
> a=simulcast rsids=1,2,3
> 
> 
> And here's the *subset* of the SDP from MMUSIC we could use in the answer:
> m=video ...
> ...
> a=rsid recv 1
> a=rsid recv 2
> a=rsid recv 3
> a=simulcast rsids=1,2,3
> 
> 
> That's it.  That's all I think we need: a simple addition to
> addTransceiver plus a simple subset of the SDP from MMUSIC.
> 
> 
> The last thing I would note is that I propose we *do not* use the
> entirety of the MMUSIC draft in WebRTC.  In particular, not the PT
> overloading or the more extensive attributes that don't map well to
> RTCRtpEncodingParameters (max-width, max-height, max-fps, max-fs, max-pps).
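
For what it's worth, the subset you keep is small enough that the
sender-side mapping can be sketched in a few lines (not from your
proposal, just an illustration):

// Turn a list of encodings into the a=rsid / a=simulcast lines shown
// above; direction is "send" in the offer and "recv" in the answer.
function simulcastLines(encodings, direction) {
  var lines = encodings.map(function (e) {
    return 'a=rsid ' + direction + ' ' + e.rsid;
  });
  lines.push('a=simulcast rsids=' +
             encodings.map(function (e) { return e.rsid; }).join(','));
  return lines;
}
// simulcastLines([{rsid: 1}, {rsid: 2}, {rsid: 3}], 'send') ->
// ['a=rsid send 1', 'a=rsid send 2', 'a=rsid send 3',
//  'a=simulcast rsids=1,2,3']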
