- From: Ben Wagner via GitHub <sysbot+gh@w3.org>
- Date: Tue, 04 May 2021 11:00:26 +0000
- To: public-webrtc-logs@w3.org
> > One use case to consider is https://ai.googleblog.com/2018/04/looking-to-listen-audio-visual-speech.html
>
> This is indeed a good use case. It seems covered AFAIK by getUserMedia+MediaStreamAudioSourceNode+AudioWorklet.

Apologies if I'm missing something obvious, but it doesn't seem possible to process both the audio and video inputs in an AudioWorklet. Nor does it seem possible for the audio data to be obtained outside of the AudioWorklet so that the audio and video can be processed together in a regular worker.

--
GitHub Notification of comment by dogben
Please view or discuss this issue at https://github.com/w3c/mediacapture-transform/issues/29#issuecomment-831855773 using your GitHub account

--
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config
Received on Tuesday, 4 May 2021 11:00:28 UTC
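
For context, a minimal sketch of the getUserMedia + MediaStreamAudioSourceNode + AudioWorklet pipeline referred to in the quoted reply. The module URL "analysis-processor.js" and the processor name "analysis-processor" are illustrative assumptions, not part of the thread; the sketch only shows where audio frames end up (on the audio rendering thread), which is the limitation dogben's comment raises.

```typescript
// Sketch: route microphone audio into an AudioWorklet for processing.
// Note that video frames are not reachable from inside the worklet,
// which is the gap discussed in the comment above.
async function startAudioProcessing(): Promise<void> {
  // Capture microphone audio via getUserMedia.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

  const context = new AudioContext();
  // Load the worklet module defining the processor (hypothetical URL).
  await context.audioWorklet.addModule("analysis-processor.js");

  // Feed the captured track into the audio graph.
  const source = new MediaStreamAudioSourceNode(context, { mediaStream: stream });
  // "analysis-processor" is a hypothetical processor registered by the module.
  const workletNode = new AudioWorkletNode(context, "analysis-processor");
  source.connect(workletNode);

  // Audio data is only seen inside the worklet's process() callback on the
  // audio rendering thread; results must be posted back over the MessagePort.
  workletNode.port.onmessage = (event) => {
    console.log("result from worklet:", event.data);
  };
}
```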