Re: [webrtc-encoded-transform] Expose RTCEncoded*Frame interfaces in Worklets (#226)

For video, just no.

Decoding audio on real-time threads does happen, but only in very specific scenarios, and it can be done safely provided the decoder itself is real-time safe (no allocation, no locking, and so on, the usual constraints). Encoding on a real-time thread is something I've never seen, and I don't really know how useful it would be or what it would bring.

The Web Codecs API, however, won't work well (or at all) in an `AudioWorklet`, because the `AudioWorklet` is, inherently and by necessity, a synchronous environment, while the Web Codecs API is an asynchronous API. I had proposed a synchronous API for Web Codecs and explained why in https://github.com/w3c/webcodecs/issues/19, but we haven't done it.
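For concreteness, here is roughly the shape of the asynchronous side (a minimal sketch, runnable on the main thread or in a Worker where `AudioDecoder` is exposed, not in an `AudioWorkletGlobalScope`). Decoded samples only surface later, through a callback on a subsequent task, which is exactly what a synchronous 128-frame `process()` callback cannot wait for without blocking the audio thread:

```js
// Minimal sketch of the asynchronous WebCodecs decode path (main thread or Worker).
// decode() only enqueues work; samples arrive later through the output callback,
// so nothing can be produced inside a single synchronous process() call.
const decoder = new AudioDecoder({
  output: (audioData) => {
    // Runs asynchronously, on a later task, not at the decode() call site.
    audioData.close();
  },
  error: (e) => console.error(e),
});
decoder.configure({ codec: "opus", sampleRate: 48000, numberOfChannels: 1 });
// decoder.decode(encodedChunk); // returns immediately, output comes later
```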

I side with @youennf on this. Communicating with an `AudioWorkletProcessor` is not hard and can easily be done extremely efficiently. Any claim that a "more performant" implementation is possible needs to be backed by something.
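For reference, the usual pattern is a single-producer/single-consumer ring buffer over a `SharedArrayBuffer`, read with `Atomics` on the audio thread: no locks, no allocation, no per-quantum `postMessage`. A minimal sketch (the buffer layout and names here are mine, for illustration only):

```js
// Lock-free SPSC ring buffer consumer running on the real-time audio thread.
// Assumed SharedArrayBuffer layout (illustration only):
//   Int32 [0] = write index, Int32 [1] = read index, then Float32 samples.
class RingBufferProcessor extends AudioWorkletProcessor {
  constructor(options) {
    super();
    const sab = options.processorOptions.sab; // SharedArrayBuffer handed in by the node
    this.indices = new Int32Array(sab, 0, 2);
    this.samples = new Float32Array(sab, 8);
    this.capacity = this.samples.length;
  }

  process(inputs, outputs) {
    const out = outputs[0][0]; // mono, for brevity
    const write = Atomics.load(this.indices, 0);
    const read = Atomics.load(this.indices, 1);
    const available = (write - read + this.capacity) % this.capacity;
    const frames = Math.min(available, out.length);
    for (let i = 0; i < frames; i++) {
      out[i] = this.samples[(read + i) % this.capacity];
    }
    // If the producer fell behind, the rest of the quantum stays silent
    // rather than blocking the audio thread.
    Atomics.store(this.indices, 1, (read + frames) % this.capacity);
    return true;
  }
}
registerProcessor("ring-buffer-processor", RingBufferProcessor);
```

The producer (main thread or a Worker) writes samples and advances the write index the same way; the `SharedArrayBuffer` is passed once via `processorOptions` when constructing the `AudioWorkletNode`.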

Once we have apps for which the limiting factor is the packetization latency or something in that area, we can revisit. 

-- 
GitHub Notification of comment by padenot
Please view or discuss this issue at https://github.com/w3c/webrtc-encoded-transform/issues/226#issuecomment-1985604108 using your GitHub account


Received on Friday, 8 March 2024 12:24:58 UTC