
Re: [mediacapture-transform] Expectations/requirements for VideoFrame and AudioData timestamps (#80)

From: Bernard Aboba via GitHub <sysbot+gh@w3.org>
Date: Wed, 09 Feb 2022 22:11:26 +0000
To: public-webrtc-logs@w3.org
Message-ID: <issue_comment.created-1034241218-1644444685-sysbot+gh@w3.org>
With spatial scalability, you can have multiple `encodedChunks` with the same `timestamp` (e.g. a base layer as well as spatial enhancement layers). What happens in this situation? Would the decoder only output a single `VideoFrame` (e.g. for the highest operating point), or would it output more than one `VideoFrame` with the same `timestamp` value? Without configuring the operating point in the decoder, how does the decoder know whether to produce a `VideoFrame` immediately from a base layer `encodedChunk` or instead to wait until it receives `encodedChunk`s for spatial enhancement layers?
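To make the ambiguity concrete, here is a hypothetical sketch (plain JavaScript, not the WebCodecs API; the chunk shape and `layer` field are illustrative assumptions) of the first interpretation: the decoder buffers chunks sharing a `timestamp` and emits one frame per timestamp at the highest spatial layer it received.

```javascript
// Hypothetical model, not the WebCodecs API: chunks carry an assumed
// `layer` field (0 = base, higher = spatial enhancement).
function groupByTimestamp(chunks) {
  const groups = new Map();
  for (const chunk of chunks) {
    if (!groups.has(chunk.timestamp)) groups.set(chunk.timestamp, []);
    groups.get(chunk.timestamp).push(chunk);
  }
  return groups;
}

// Interpretation A from the question: one VideoFrame per timestamp,
// decoded at the highest operating point available for that timestamp.
function framesAtHighestLayer(chunks) {
  const frames = [];
  for (const [timestamp, group] of groupByTimestamp(chunks)) {
    const top = group.reduce((a, b) => (b.layer > a.layer ? b : a));
    frames.push({ timestamp, layer: top.layer });
  }
  return frames;
}

const chunks = [
  { timestamp: 0, layer: 0 },     // base layer
  { timestamp: 0, layer: 1 },     // spatial enhancement, same timestamp
  { timestamp: 33333, layer: 0 }, // next frame, base layer only
];
console.log(framesAtHighestLayer(chunks));
// → [ { timestamp: 0, layer: 1 }, { timestamp: 33333, layer: 0 } ]
```

The alternative interpretation (one `VideoFrame` per chunk, so two frames with `timestamp` 0 here) is exactly the behavior the question asks the spec to pin down; note that the grouping approach also requires knowing when a timestamp's group is complete, which is the "wait or emit immediately" problem above.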

GitHub Notification of comment by aboba
Please view or discuss this issue at https://github.com/w3c/mediacapture-transform/issues/80#issuecomment-1034241218 using your GitHub account

Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config
Received on Wednesday, 9 February 2022 22:11:29 UTC

This archive was generated by hypermail 2.4.0 : Saturday, 6 May 2023 21:19:56 UTC