Re: [webrtc-extensions] Add a requestKeyframe() API (#37)

It would be good to understand the use case.
For instance, would applications like to control the maximum time between the key frames an encoder produces, the two extreme cases being key frames only, and no key frames except in response to a FIR?

A few additional questions:
- Would it also be useful for some audio codecs?
- Is it WebRTC specific or is there a use case for MediaRecorder as well?
- What does it mean for muted content? I guess the next frame would be a key frame, which might require waiting for up to 1 second, so the promise would resolve after 1 second. Is that correct?
- What should be the behaviour if this method is called 10 times synchronously (see the first sketch below)? Should there be one key frame, or should the next 10 frames be key frames?
- Is it a concern that a page can measure how long it takes to generate a key frame? Say I call this API on a canvas track, later generate a canvas frame, and compute the time between the canvas frame generation and the promise resolution (see the second sketch below).
- This is using the operations chain, presumably for consistency with the replaceTrack approach. Is that so, or is there another reason? Is there a future need to control exactly which frame gets encoded as a key frame?
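To make the coalescing question concrete, I am thinking of something like the following, assuming requestKeyframe() is exposed on RTCRtpSender and returns a promise (my reading of the PR, not confirmed by it):

  // Should this produce a single key frame, or make the next ten encoded
  // frames key frames? sender is assumed to be an RTCRtpSender; this runs
  // inside an async function.
  const requests = [];
  for (let i = 0; i < 10; i++) {
    requests.push(sender.requestKeyframe());
  }
  await Promise.all(requests);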
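And a rough sketch of the timing measurement I have in mind for the canvas case. drawSomething() is a placeholder for whatever the page draws, and I am assuming the promise resolves once the resulting key frame has been encoded:

  // Rough sketch: measure the page-observable time between pushing a canvas
  // frame and the requestKeyframe() promise resolving.
  async function measureKeyFrameDelay(peerConnection, canvas) {
    const stream = canvas.captureStream(0);   // 0 fps: frames are pushed manually
    const [track] = stream.getVideoTracks();
    const sender = peerConnection.addTrack(track, stream);

    const requested = sender.requestKeyframe();
    drawSomething(canvas);                    // placeholder drawing code
    track.requestFrame();                     // push the canvas frame into the track
    const frameTime = performance.now();
    await requested;
    return performance.now() - frameTime;     // observable key frame encoding time
  }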

-- 
GitHub Notification of comment by youennf
Please view or discuss this issue at https://github.com/w3c/webrtc-extensions/pull/37#issuecomment-628488932 using your GitHub account

Received on Thursday, 14 May 2020 08:44:41 UTC