Re: [webrtc-pc] Clarify that getSynchronizationSources() should return information even if the track has no sink (<video> tag) (#2240)

I don't believe that it adds sufficient clarity to the API.

`getSynchronizationSources()` returning an empty list (in cases without a sink) remains a valid interpretation for as long as an implementation is allowed to postpone the consumption of packets indefinitely.

For example:

1. Things are set up without a sink.
2. Packets are received and inserted into the jitter buffer.
3. Packets are not consumed since there's no sink.
4. `getSynchronizationSources()` is called.
5. 20 seconds pass.
6. Sink is added.
7. Packets that were queued up are now consumed and turned into frames that get delivered to the track.
8. `getSynchronizationSources()` is called.
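The timeline above can be modeled with a small standalone simulation (plain Node.js; the class and method names here are hypothetical stand-ins, not the real WebRTC API surface). Under the "lazy" interpretation, frames are only delivered to the track once a sink is attached, so a call made before the sink arrives necessarily sees nothing:

```javascript
// Minimal model of a receiver's jitter buffer under the "lazy" interpretation:
// packets are only consumed (turned into frames) once a sink is attached.
// All names here are hypothetical, for illustration only.
class LazyReceiver {
  constructor() {
    this.jitterBuffer = [];   // packets received but not yet consumed
    this.sink = null;         // e.g. a <video> element
    this.sources = new Map(); // ssrc -> timestamp of last delivered frame
  }
  receivePacket(ssrc, now) {
    this.jitterBuffer.push({ ssrc });
    if (this.sink) this.consume(now);
  }
  attachSink(sink, now) {
    this.sink = sink;
    this.consume(now); // queued packets become frames only now
  }
  consume(now) {
    for (const pkt of this.jitterBuffer) {
      // The frame is "delivered to the track" at consumption time,
      // not at packet-arrival time.
      this.sources.set(pkt.ssrc, now);
    }
    this.jitterBuffer = [];
  }
  // Mirrors getSynchronizationSources(): report sources whose frames were
  // delivered to the track within the last 10 seconds.
  getSynchronizationSources(now) {
    return [...this.sources]
      .filter(([, t]) => now - t <= 10_000)
      .map(([ssrc, t]) => ({ source: ssrc, timestamp: t }));
  }
}

const rx = new LazyReceiver();
rx.receivePacket(1234, 0);                                // steps 1-3
console.log(rx.getSynchronizationSources(0).length);      // step 4: 0 (empty)
rx.attachSink({}, 20_000);                                // steps 5-7
console.log(rx.getSynchronizationSources(20_000).length); // step 8: 1
```

Note that because delivery is stamped at consumption time, (8) does capture the frames from (7), which is only consistent if (4) returned nothing.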

If (3) is allowed, then (4) must return an empty list, or else (8) won't capture the info for the frames in (7) despite those frames being delivered within the last 10 seconds.

I think that the clarification/spec-change needs to be on the interaction between `RTCRtpReceiver` and its `MediaStreamTrack`. E.g. declaring that packets must be consumed, and treating frames as if they are delivered to the track, regardless of whether there's a sink or not.

An implementation may perform certain invisible optimizations, such as not spending CPU cycles on the actual audio/video codec work, but "behaving as if frames are not delivered to the track until a sink is added" doesn't seem to be one of them.
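To contrast with the lazy model, here is a sketch of that "invisible optimization" reading (again with hypothetical names): the receiver skips the expensive decode work while there is no sink, but still behaves as if frames are delivered on arrival, so synchronization-source info stays fresh:

```javascript
// Sketch: the receiver may skip codec work while no sink is attached, yet
// still treats frames as delivered to the track on arrival. Hypothetical
// names, not the real WebRTC API.
class EagerReceiver {
  constructor() {
    this.sink = null;
    this.sources = new Map(); // ssrc -> timestamp of last "delivered" frame
  }
  receivePacket(ssrc, now) {
    // RTP-header bookkeeping is cheap and happens regardless of a sink...
    this.sources.set(ssrc, now);
    // ...while the CPU-heavy codec work can be skipped without a sink.
    if (this.sink) this.decode(ssrc);
  }
  decode(ssrc) { /* actual audio/video codec work elided */ }
  getSynchronizationSources(now) {
    return [...this.sources]
      .filter(([, t]) => now - t <= 10_000)
      .map(([ssrc, t]) => ({ source: ssrc, timestamp: t }));
  }
}

const rx = new EagerReceiver();
rx.receivePacket(1234, 0);
console.log(rx.getSynchronizationSources(0).length); // 1, even with no sink
```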

P.S. Some of the stats in `getStats()`, like `RTCAudioReceiverStats.totalSamplesDuration`, should be similarly affected by sink-less tracks, since they too depend on audio frames being delivered to the track.

-- 
GitHub Notification of comment by chxg
Please view or discuss this issue at https://github.com/w3c/webrtc-pc/issues/2240#issuecomment-517444527 using your GitHub account

Received on Thursday, 1 August 2019 20:25:07 UTC