Re: [webrtc-pc] RTCPriorityType is not documented at all

> What people SHOULD be doing in the simulcast case when congestion is expected is to assign high priority to the minor bitrate stream (so that it will always be sent) and low priority to the largest stream (so that its packets will be dropped first).

I don't think I understand the definition of `RTCPriorityType`, then. The two example implementation strategies are:

>    o  When the available bandwidth is known from the congestion control
>       algorithm, configure each codec and each data channel with a
>       target send rate that is appropriate to its share of the available
>       bandwidth.

Meaning the "minor bitrate stream", having the higher priority, would be configured with the larger share of bandwidth, i.e. a higher target bitrate? And the other is:

>    o  When congestion control indicates that a specified number of
>       packets can be sent, send packets that are available to send using
>       a weighted round robin scheme across the connections.

This sounds like it would result in the higher bitrate layer either losing too many packets to be decodable (if the sender drops the excess packets) or building up latency (if it queues them instead).
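
To make that concrete, here's roughly what I picture the round-robin strategy doing. This is just a sketch; the weight values, queue bookkeeping, and `transmit` helper are made up for illustration, not anything from the spec:

```ts
// Hypothetical sketch of the weighted-round-robin strategy: each encoding
// gets a send queue and a weight derived from its RTCPriorityType.
// The weight values below are illustrative only.
const WEIGHTS: Record<RTCPriorityType, number> = {
  "very-low": 1,
  "low": 2,
  "medium": 4,
  "high": 8,
};

interface EncodingQueue {
  rid: string;
  priority: RTCPriorityType;
  queue: Uint8Array[]; // packets waiting to be sent
}

declare function transmit(packet: Uint8Array): void; // placeholder network send

// Send `budget` packets this round, split across encodings by weight.
// The high-bitrate layer only ever gets its weighted share, so under
// congestion it either drops the rest or lets its queue grow without bound.
function sendRound(encodings: EncodingQueue[], budget: number): void {
  const totalWeight = encodings.reduce(
    (sum, e) => sum + (e.queue.length > 0 ? WEIGHTS[e.priority] : 0),
    0,
  );
  if (totalWeight === 0) return;

  for (const e of encodings) {
    const share = Math.floor((budget * WEIGHTS[e.priority]) / totalWeight);
    for (const packet of e.queue.splice(0, share)) {
      transmit(packet);
    }
  }
}
```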

What we actually want is for the higher bitrate layer to be turned off completely when there's too much congestion. `RTCPriorityType` doesn't accomplish this.
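
What I'd expect an application to do instead is something like the following, with the congestion signal itself hand-waved (assume `congested` comes from whatever the app is watching, and that `encodings[1]` is the high-bitrate layer, matching the order given in `sendEncodings`):

```ts
// Rough sketch: deactivate the high-bitrate simulcast layer entirely under
// congestion, rather than letting a priority-based scheduler starve it.
async function adaptToCongestion(
  sender: RTCRtpSender,
  congested: boolean,
): Promise<void> {
  const params = sender.getParameters();
  const highLayer = params.encodings[1];
  if (highLayer) {
    highLayer.active = !congested; // turn the big layer off completely
    await sender.setParameters(params);
  }
}
```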

Maybe it was never intended for simulcast at all, and should be moved up to the `RtpParameters` level (instead of `RtpEncodingParameters`)? It's pretty odd that the definition explicitly says "this isn't defined for multiple streams coming from a single media source", yet that's exactly how we appear to be using it.
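
For reference, with the per-encoding placement, following the quoted advice today would look roughly like this (the `rid` names and scale factors are just illustrative):

```ts
// Priority currently has to be expressed once per simulcast encoding, even
// though the quoted text says it isn't defined for multiple streams from a
// single source. Values follow the advice above: high priority on the small
// layer, low priority on the big one.
declare const videoTrack: MediaStreamTrack;

const pc = new RTCPeerConnection();
pc.addTransceiver(videoTrack, {
  direction: "sendonly",
  sendEncodings: [
    { rid: "small", scaleResolutionDownBy: 4, priority: "high" },
    { rid: "big", scaleResolutionDownBy: 1, priority: "low" },
  ],
});
```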

-- 
GitHub Notification of comment by taylor-b
Please view or discuss this issue at https://github.com/w3c/webrtc-pc/issues/1888#issuecomment-396326533 using your GitHub account

Received on Monday, 11 June 2018 17:44:34 UTC