- From: Jan-Ivar Bruaroey via GitHub <sysbot+gh@w3.org>
- Date: Thu, 20 Aug 2020 19:25:49 +0000
- To: public-webrtc-logs@w3.org
I've [criticized](https://github.com/mozilla/standards-positions/issues/170#issuecomment-509873541) the current Web Speech API for being too tightly coupled to microphone input and default speaker output. I suggest the WG work to plug into the platform's existing audio sources and sinks through `MediaStreamTrack` (there's precedent in [Web Audio](https://developer.mozilla.org/en-US/docs/Web/API/AudioContext/createMediaStreamDestination)). Output selection would then fall out for free. E.g.

```js
// Proposed API: route speech synthesis output into a MediaStream, so the
// output device is chosen on the media element rather than by the speech API.
audioElement.srcObject = speechSynthesis.createMediaStreamDestination();
await audioElement.setSinkId(
  (await navigator.mediaDevices.selectAudioOutput({deviceId})).deviceId);
speechSynthesis.speak(new SpeechSynthesisUtterance("Hello world!"));
```

--
GitHub Notification of comment by jan-ivar
Please view or discuss this issue at https://github.com/w3c/mediacapture-output/issues/102#issuecomment-677855169 using your GitHub account

--
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config
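For reference, a rough sketch of the existing Web Audio pattern the comment cites as precedent: synthesized audio is routed into a `MediaStream` via `createMediaStreamDestination()`, and the sink is selected on the media element. It assumes an async context with user activation, and uses the `selectAudioOutput()` picker proposed in this issue for obtaining a `deviceId`; everything else is shipping Web Audio / media element API.

```js
// Web Audio today: synthesize audio into a MediaStream.
const ctx = new AudioContext();
const dest = ctx.createMediaStreamDestination();
const osc = ctx.createOscillator();
osc.connect(dest);
osc.start();

// Play the stream through a media element and pick its output device there.
const audioElement = new Audio();
audioElement.srcObject = dest.stream;

// selectAudioOutput() is the picker proposed in this issue; it resolves with
// a MediaDeviceInfo whose deviceId feeds setSinkId().
const {deviceId} = await navigator.mediaDevices.selectAudioOutput();
await audioElement.setSinkId(deviceId);
await audioElement.play();
```

The point of the pattern is that device selection lives entirely on the element (`setSinkId`), so any API that can produce a `MediaStream` gets output selection without defining its own.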
Received on Thursday, 20 August 2020 19:25:51 UTC