- From: henbos via GitHub <sysbot+gh@w3.org>
- Date: Wed, 12 Sep 2018 13:40:53 +0000
- To: public-webrtc@w3.org
henbos has just created a new issue for https://github.com/w3c/webrtc-pc:

== getSynchronizationSources (and getContributingSources) should work for video too ==

Multiple things in the spec hint that this is only for audio RTP streams, e.g.:

- "When the first audio frame contained in an RTP packet is delivered to the RTCRtpReceiver's MediaStreamTrack for playout, the user agent MUST queue a task to update the relevant information..."
- audioLevel says that audioLevel must be calculated based on the audio data even if the header extension containing this value is missing, concluding that the member must never be absent for SSRCs. This conclusion would not hold if getSynchronizationSources() were applicable to video RTP streams too.

The spec should be updated to support both the audio and video cases. The "source" (SSRC or CSRC) and timestamp are both valuable pieces of information for a receiver, whether receiving audio or video. For example, they can be used to figure out whose stream is currently being received ("source" used as an identifier), or whether a stream is "active" (its timestamp is recent). These APIs are designed to work in real time; trying to get this information through other means, such as getStats(), is not viable performance-wise.

Was there any reason this was written with only audio in mind?

Please view or discuss this issue at https://github.com/w3c/webrtc-pc/issues/1983 using your GitHub account
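As an illustration of the "active stream" use case described above, here is a minimal sketch of the kind of recency check a receiver might run over the entries returned by getSynchronizationSources(). The helper name and the 2-second threshold are assumptions for illustration, not anything from the spec; per the spec, each entry carries a `source` identifier and a `timestamp` of the most recent playout.

```javascript
// Hypothetical helper: given the array returned by
// receiver.getSynchronizationSources() (entries with `source` and
// `timestamp` fields), return the identifiers of sources whose most
// recent frame was played out within the last `maxAgeMs` milliseconds.
function activeSources(sources, nowMs, maxAgeMs = 2000) {
  return sources
    .filter((s) => nowMs - s.timestamp <= maxAgeMs)
    .map((s) => s.source);
}

// In a browser this would be polled in real time, e.g.:
//   const active = activeSources(receiver.getSynchronizationSources(),
//                                performance.now());
```

The point of the issue is that this kind of check is equally meaningful for a video receiver, and that polling getStats() instead would be too slow for real-time use.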
Received on Wednesday, 12 September 2018 13:40:56 UTC