RE: Interacting with WebRTC, the Web Audio API and other external sources

Hi everyone,

As I'm finally looking to start using the Web Speech API, it seems a missed
opportunity that recognition can only capture audio through the microphone,
especially since you can't pick the audio source (e.g. headset vs. built-in
mic). I'd like to +1 the proposal quoted below to support providing an input
stream. This would not only let authors use getUserMedia to select the right
stream, but also make it possible to pass previously recorded audio streams
or MP3s from the backend. What do you think, and what would be the next steps
to help push this?

Many thanks!

"1) Transcripts for (live) communication.

While the specification does not mandate a maximum duration of a
speech input stream, this suggestion is most appropriate for
implementations utilizing a local recognizer. Allowing MediaStreams to
be used as an input for a SpeechRecognition object, for example
through a new "inputStream" property as an alternative to the start,
stop and abort methods, would enable authors to supply external input
to be recognized. This may include, but is not limited to, prerecorded
audio files and WebRTC live streams, both from local and remote
parties."
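
To make the idea concrete, here is a hypothetical sketch of what the proposed
"inputStream" property might look like in use. The property name comes from the
quoted proposal; the device-selection logic uses the standard
navigator.mediaDevices API, but none of this is an agreed-upon design:

```javascript
// Hypothetical sketch: feed a specific input device into SpeechRecognition
// via the proposed "inputStream" property. Only the function is defined here;
// running it requires a browser with getUserMedia and (hypothetically) the
// new property from the quoted proposal.
async function recognizeFromHeadset(onResult) {
  // Enumerate devices and prefer an audio input whose label mentions "headset".
  const devices = await navigator.mediaDevices.enumerateDevices();
  const headset = devices.find(
    (d) => d.kind === "audioinput" && /headset/i.test(d.label)
  );

  // Capture from the chosen device (falling back to the default input).
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: headset ? { deviceId: { exact: headset.deviceId } } : true,
  });

  const Recognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;
  const recognition = new Recognition();
  recognition.onresult = onResult;

  // Proposed extension (NOT in the current spec): supply the stream directly
  // instead of letting start() open the default microphone.
  recognition.inputStream = stream;
  recognition.start();
  return recognition;
}
```

The same mechanism would cover the other cases mentioned above: a remote WebRTC
track delivered through RTCPeerConnection's "track" event, or a MediaStream
produced from prerecorded audio via HTMLMediaElement.captureStream(), could be
assigned to inputStream in exactly the same way.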

*Clement Wehrung* | Senior Product Manager | Fuze


Received on Tuesday, 5 March 2019 14:49:34 UTC