Re: [webrtc-pc] Are sinks a prerequisite for decoding? getSynchronizationSources() without <video> (#2240)

@henbos, on a related note, while implementing this interface in Firefox a number of things occurred to me. This interface cuts across three layers: RTP, decoding, and presentation. Each layer has additional filters that can cause a packet not to be reported. That means that if one is interested in the data from the perspective of a particular layer (say RTP: is so-and-so delivering media to me?), one has to contend with the layers above it (i.e., presentation: am I seeing media from so-and-so?). As long as there is this tight coupling, I can see this interface being revisited for new use cases and new corner cases like the one above.

If there were an interface that reported the RTP data when it arrived, that could satisfy the use cases where media isn't being presented. The rtpTimestamp plus SSRC could be used to correlate the two sets of data. If this were (one or more) callback(s), the browser could defer work until a callback was registered, and be assured that each packet was observed. This would allow for option 2) without suppressing the availability of the RTP information.
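To make the correlation concrete, here is a minimal TypeScript sketch of what that might look like. `onRtpPacketReceived` and `RtpArrivalInfo` are hypothetical names for the proposed callback, nothing in the spec today; only `getSynchronizationSources()` and its `source`/`rtpTimestamp` members exist.

```ts
// Hypothetical shape of the per-packet data such a callback could surface.
interface RtpArrivalInfo {
  ssrc: number;
  rtpTimestamp: number;
  receiveTime: DOMHighResTimeStamp;
}

// Hypothetical registration point. Registering lazily would let the
// browser skip all bookkeeping until someone actually listens.
declare function onRtpPacketReceived(
  receiver: RTCRtpReceiver,
  callback: (info: RtpArrivalInfo) => void,
): void;

declare const receiver: RTCRtpReceiver;

// Key arrival-layer records by (SSRC, rtpTimestamp), the pair both
// layers would report.
const arrivals = new Map<string, RtpArrivalInfo>();

onRtpPacketReceived(receiver, (info) => {
  arrivals.set(`${info.ssrc}:${info.rtpTimestamp}`, info);
});

// Later: join presentation-layer records from getSynchronizationSources()
// with the arrival-layer records captured above.
function correlate(r: RTCRtpReceiver) {
  return r.getSynchronizationSources().map((sync) => ({
    sync,
    arrival: arrivals.get(`${sync.source}:${sync.rtpTimestamp}`),
  }));
}
```

A page interested only in the RTP layer would never call `getSynchronizationSources()` at all, while one interested in presentation could use the join above to answer both questions.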

On 3), I think that the presence of this interface should mandate decoding (as it used to, when the audio had to be decoded to compute the level if it wasn't already present).

On 1), does it make sense for this behavior to be implementation-dependent? Is there a reason not to upgrade the MAY to a MUST, ensuring that one still only gets source information for presented packets?



-- 
GitHub Notification of comment by na-g
Please view or discuss this issue at https://github.com/w3c/webrtc-pc/issues/2240#issuecomment-515524567 using your GitHub account

Received on Friday, 26 July 2019 16:51:17 UTC