Re: [mediacapture-transform] Is MediaStreamTrackProcessor for audio necessary? (#29)

I think the question of whether something is necessary is the wrong one to ask, since arguably, nothing is necessary. 
For example, combining getUserMedia + MediaStreamAudioSourceNode + AudioWorklet for audio with a separate video processing API (such as MediaStreamTrackProcessor/MediaStreamTrackGenerator) in this context would be a lot more difficult than having a symmetric API for audio and video.
For starters, getting audio data out of an AudioWorklet efficiently typically relies on SharedArrayBuffer, which requires cross-origin isolation. Setting up MediaStreamAudioSourceNode+AudioWorklet on one side and video processing somewhere else, using completely different APIs with different programming models, adds even more friction.
Moreover, the unique advantages offered by AudioWorklet (e.g., running on a real-time thread) do not apply to this specific use case.
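To illustrate the symmetry argument, here is a minimal sketch of what an audio-capable MediaStreamTrackProcessor looks like, following the shape of the proposed mediacapture-transform API as implemented in Chromium. This is browser-only code (it assumes navigator.mediaDevices and MediaStreamTrackProcessor exist in the page or worker context), not something the spec guarantees everywhere:

```javascript
// Sketch: the same ReadableStream-based programming model for both kinds
// of track, instead of AudioWorklet for audio and a stream API for video.
async function processTracks() {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true,
  });

  // Identical construction for audio and video tracks; the only difference
  // is the frame type delivered (AudioData vs. VideoFrame).
  const audioProcessor = new MediaStreamTrackProcessor({
    track: stream.getAudioTracks()[0],
  });
  const videoProcessor = new MediaStreamTrackProcessor({
    track: stream.getVideoTracks()[0],
  });

  // Read one audio frame; in a real pipeline this would feed a
  // TransformStream and a MediaStreamTrackGenerator.
  const reader = audioProcessor.readable.getReader();
  const { value: audioData } = await reader.read();
  // ... process audioData (an AudioData object) ...
  audioData.close(); // release the underlying media resource
}
```

The point is not the specific calls but that a single programming model covers both media kinds, avoiding the cross-origin-isolation and dual-API friction described above.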

I think this shows that there is real value in adding an audio version of the same API used for video.
Keeping the bug open to continue the discussion.


-- 
GitHub Notification of comment by guidou
Please view or discuss this issue at https://github.com/w3c/mediacapture-transform/issues/29#issuecomment-863937186 using your GitHub account


-- 
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config

Received on Friday, 18 June 2021 10:29:30 UTC