Re: [mediacapture-transform] Expectations/requirements for VideoFrame and AudioData timestamps (#80)

With spatial scalability, you can have multiple `EncodedVideoChunk`s with the same `timestamp` (e.g. a base layer as well as spatial enhancement layers). What happens in this situation? Would the decoder output a single `VideoFrame` (e.g. for the highest operating point), or would it output more than one `VideoFrame` with the same `timestamp` value? Without the operating point being configured in the decoder, how does the decoder know whether to produce a `VideoFrame` immediately from a base-layer chunk, or instead to wait until it receives the chunks for the spatial enhancement layers?
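To make the ambiguity concrete, here is a hypothetical sketch (not real WebCodecs behavior; the chunk/frame shapes are simplified stand-ins for `EncodedVideoChunk`/`VideoFrame`, and `spatialLayer` is an assumed field) contrasting the two possible decoder behaviors described above:

```javascript
// Behavior A: emit one output frame per chunk, so duplicate timestamps
// can appear in the output stream.
function decodePerChunk(chunks) {
  return chunks.map(c => ({ timestamp: c.timestamp, spatialLayer: c.spatialLayer }));
}

// Behavior B: coalesce chunks sharing a timestamp and emit a single frame
// at the highest spatial layer received (i.e. one operating point).
function decodePerTimestamp(chunks) {
  const byTs = new Map();
  for (const c of chunks) {
    const best = byTs.get(c.timestamp);
    if (!best || c.spatialLayer > best.spatialLayer) byTs.set(c.timestamp, c);
  }
  return [...byTs.values()].map(c => ({ timestamp: c.timestamp, spatialLayer: c.spatialLayer }));
}

const chunks = [
  { timestamp: 0, spatialLayer: 0 },     // base layer
  { timestamp: 0, spatialLayer: 1 },     // spatial enhancement, same timestamp
  { timestamp: 33333, spatialLayer: 0 }, // next picture, base layer only
];

console.log(decodePerChunk(chunks).length);     // 3 frames, two sharing timestamp 0
console.log(decodePerTimestamp(chunks).length); // 2 frames, one per timestamp
```

Behavior B also shows why the question about waiting matters: the coalescing decoder cannot emit the frame for timestamp 0 until it knows no further enhancement-layer chunk with that timestamp is coming.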

GitHub Notification of comment by aboba

Received on Wednesday, 9 February 2022 22:11:29 UTC