Split simulcast encodings into N RtpSenders

Hi,

The rules for sending simulcast (let's say VP8 and H264 at the same
time over different streams) imply having a single RtpSender with two
encodings (one for VP8 and another one for H264).
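
In code that would be something like this (just a rough sketch; the
transport/track variables, SSRCs and payload types are placeholders,
and the dictionary members are as I read them in the current spec
draft):

  const sender = new RTCRtpSender(videoTrack, dtlsTransport);

  sender.send({
    codecs: [
      { name: "VP8",  payloadType: 96, clockRate: 90000 },
      { name: "H264", payloadType: 97, clockRate: 90000 }
    ],
    encodings: [
      // One encoding per codec, both handled by the same sender.
      { ssrc: 1111, codecPayloadType: 96, active: true },
      { ssrc: 2222, codecPayloadType: 97, active: true }
    ]
  });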

But when it comes to the RtpReceiver, passing those same parameters to
receive() means that the receiver should be able to switch between
both streams at any time. As explained in #558, that is hard and
requires RFC 6051 and so on.
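
That is, the exact mirror of the sender above (same sketch, same
placeholders):

  const receiver = new RTCRtpReceiver(dtlsTransport, "video");

  receiver.receive({
    codecs: [
      { name: "VP8",  payloadType: 96, clockRate: 90000 },
      { name: "H264", payloadType: 97, clockRate: 90000 }
    ],
    encodings: [
      { ssrc: 1111, codecPayloadType: 96 },
      { ssrc: 2222, codecPayloadType: 97 }
    ]
  });

  // receiver.track is now expected to seamlessly reflect whichever
  // stream the sender (or the SFU) forwards at each moment.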

So IMHO it's clear that, on the receiver side, it's much better to
create two separate receivers (and thus two different
MediaStreamTracks, one for VP8 and one for H264) and render the
active one.
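
Sketch of what I mean (same placeholders as above):

  const vp8Receiver  = new RTCRtpReceiver(dtlsTransport, "video");
  const h264Receiver = new RTCRtpReceiver(dtlsTransport, "video");

  vp8Receiver.receive({
    codecs: [{ name: "VP8", payloadType: 96, clockRate: 90000 }],
    encodings: [{ ssrc: 1111, codecPayloadType: 96 }]
  });

  h264Receiver.receive({
    codecs: [{ name: "H264", payloadType: 97, clockRate: 90000 }],
    encodings: [{ ssrc: 2222, codecPayloadType: 97 }]
  });

  // Two separate tracks; the application just attaches the active
  // one to its <video> element:
  videoElement.srcObject = new MediaStream([vp8Receiver.track]);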

If so, why don't we encourage simulcast via N RtpSenders, each one
with a single encoding? That would produce a proper 1:1 mapping
between RtpSenders and RtpReceivers.
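
Again as a rough sketch, the sending side would become:

  const vp8Sender  = new RTCRtpSender(videoTrack, dtlsTransport);
  const h264Sender = new RTCRtpSender(videoTrack, dtlsTransport);

  vp8Sender.send({
    codecs: [{ name: "VP8", payloadType: 96, clockRate: 90000 }],
    encodings: [{ ssrc: 1111, codecPayloadType: 96 }]
  });

  h264Sender.send({
    codecs: [{ name: "H264", payloadType: 97, clockRate: 90000 }],
    encodings: [{ ssrc: 2222, codecPayloadType: 97 }]
  });

  // Each sender now maps 1:1 to one of the receivers above.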

NOTE: Of course I'm talking about some kind of SFU scenario in which
the SFU can decide which encoding to forward at any time.

Is there any impediment here due to WebRTC 1.0? In other words:
would it make sense to ask the WebRTC WG that
PeerConnection.getSenders() return as many RtpSenders as there are
encodings?

Well, given that those N simulcast encodings are expressed within a
single m= line, I can guess the problem... The RtpTransceiver would
then have to hold N RtpSenders rather than just one. Is that it?

Thanks a lot.



[#558] https://github.com/openpeer/ortc/issues/558


-- 
Iñaki Baz Castillo
<ibc@aliax.net>

Received on Thursday, 2 June 2016 21:13:10 UTC