Issue 558: Split simulcast encodings into N RtpReceivers

https://github.com/openpeer/ortc/issues/558


Inaki said: 

" the only feasible way to handle switched streams in the receiver is by handling each potential stream in a separate RtpReceiver with a different MediaStreamTrack, which leads to 1:N relationship between RtpSenders and RtpReceivers."

[BA]  I don't think it helps to split each RTP stream in a received simulcast into its own separate MediaStreamTrack, because in order to render the video, you'd need to switch between the simulcast tracks, which might need to happen multiple times per second.
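The concern above can be sketched as follows: with a single receiver, an SFU layer switch is just a per-frame routing decision that the application never sees, whereas with one track per encoding the application must re-select the rendered track on every switch. This is a hypothetical illustration with made-up frame shapes, not ORTC API code:

```typescript
// Frames arriving from an SFU that switches between simulcast layers.
interface Frame {
  encodingId: string; // which simulcast stream this frame belongs to
  data: Uint8Array;
}

// Inside a single RtpReceiver, layer switches are invisible to the app:
// every decodable frame feeds the one output MediaStreamTrack regardless
// of which encoding it arrived on.
function singleTrackOutput(frames: Frame[]): Frame[] {
  return frames; // all frames go to one track
}

// With one track per encoding, the application must instead follow the
// currently active layer and re-select the rendered track on every switch.
function activeTrackSwitches(frames: Frame[]): number {
  let switches = 0;
  let current: string | undefined;
  for (const f of frames) {
    if (f.encodingId !== current) {
      if (current !== undefined) switches++;
      current = f.encodingId;
    }
  }
  return switches;
}

const frames: Frame[] = [
  { encodingId: "high", data: new Uint8Array() },
  { encodingId: "low", data: new Uint8Array() },  // SFU switched down
  { encodingId: "high", data: new Uint8Array() }, // SFU switched back up
];
console.log(activeTrackSwitches(frames)); // 2 track re-selections in quick succession
```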

As I understand it, ORTC Lib has implemented general support for simulcast reception, where an RtpReceiver receives simulcast streams and outputs a single video track.  So it can be done.  My question (see PR https://github.com/openpeer/ortc/pull/538 ) is whether this should be required in ORTC API implementations or not. 

Since an SFU can splice simulcast streams together into a single stream sent to the receiver, an RtpReceiver does not necessarily need to support simulcast where such an SFU is used; only the RtpSender does.  As a result, it is not clear to me that reception of multiple streams within an RtpReceiver needs to be required.
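The SFU's role can be sketched with a hypothetical layer-selection function: the SFU picks the highest simulcast layer that fits the receiver's estimated downlink and forwards only that one, so the receiver sees a single RTP stream and needs no simulcast support in its RtpReceiver. Names and thresholds here are made up for illustration:

```typescript
// Hypothetical SFU forwarding logic (illustrative, not from any spec).
interface Layer {
  encodingId: string;
  bitrate: number; // bits per second
}

// Pick the highest layer that fits the available bitrate, falling back
// to the lowest layer if nothing fits.
function selectLayer(layers: Layer[], availableBitrate: number): Layer {
  const sorted = [...layers].sort((a, b) => b.bitrate - a.bitrate);
  return (
    sorted.find((l) => l.bitrate <= availableBitrate) ??
    sorted[sorted.length - 1]
  );
}

const layers: Layer[] = [
  { encodingId: "low", bitrate: 150_000 },
  { encodingId: "mid", bitrate: 500_000 },
  { encodingId: "high", bitrate: 1_500_000 },
];

console.log(selectLayer(layers, 600_000).encodingId); // "mid"
console.log(selectLayer(layers, 100_000).encodingId); // "low" (fallback)
```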

The other use case for receiving multiple streams and outputting a single video track is support for Multiple RTP Stream Single Transport (MRST) within Scalable Video Coding (SVC).  Today support for MRST is not very common: implementations of VP8, VP9 and AV1 support only SRST transport, and most implementations of H.264/SVC also utilize SRST.  In Edge, we do support MRST in our H.264/SVC implementation, but in a codec-specific way that does not require general support within the ORTC API.  As a result, it is hard to make an argument that all ORTC implementations need to support MRST in a general way.

Received on Friday, 3 June 2016 18:23:05 UTC