Re: [mediacapture-output] Directing Web Speech API audio to a specific output device? (#102)

> Hello! Have there been any discussions around giving developers the ability to direct speech generated via the Web Speech API SpeechSynthesis interface to a specific audio output? I've not been able to find any, and it seems like a fairly important feature.

The Web Speech API does not define any speech synthesis algorithm, and neither Chromium nor Firefox ships with a speech synthesis engine.

On Linux, the Web Speech API implementation establishes a socket connection to Speech Dispatcher (`speechd`) https://github.com/brailcom/speechd.

The Web Speech API does not currently specify any means to capture the audio output of `speechSynthesis.speak()`.
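For contrast, a minimal sketch of what the Audio Output Devices API already allows for media elements, and what is missing for speech synthesis; `clip.wav` is a placeholder file and `deviceId` is assumed to come from `navigator.mediaDevices.enumerateDevices()`:

```js
// Works today: an HTMLMediaElement's output can be directed to a chosen device.
const audio = new Audio('clip.wav');
await audio.setSinkId(deviceId);
await audio.play();

// No equivalent for speech synthesis: the utterance plays on the default
// device, and the API exposes no stream or element to call setSinkId() on.
speechSynthesis.speak(new SpeechSynthesisUtterance('test'));
```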

Since the Web Speech API simply communicates with a locally installed speech synthesis engine, one approach is to not use the Web Speech API at all. Instead, install one or more speech synthesis engines locally and communicate with the engine directly. For example, the output of `espeak-ng` https://github.com/espeak-ng/espeak-ng is 1-channel WAV: the `STDOUT` (raw binary data) of `$ espeak-ng --stdout 'test'` can be passed as a message to any origin, parsed to a `Float32Array`, and set as `outputs` in `AudioWorkletProcessor.process()`, where a `MediaStream` created with `MediaStreamAudioDestinationNode` can be used for output. One working version of using Native Messaging with `espeak-ng` to capture speech synthesis output is https://github.com/guest271314/native-messaging-espeak-ng; I will update that repository to the version described above.
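A minimal sketch of that pipeline, assuming the raw WAV bytes from `espeak-ng --stdout` have already been relayed to the page (e.g. by a Native Messaging host); the processor name `pcm-player`, the file names, and the 44-byte canonical RIFF header assumption are illustrative, not part of any shipped API:

```js
// pcm-player-processor.js — AudioWorkletProcessor that plays queued Float32 PCM.
class PCMPlayerProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.queue = []; // pending chunks of mono Float32 samples
    this.port.onmessage = ({ data }) => this.queue.push(new Float32Array(data));
  }
  process(inputs, outputs) {
    const output = outputs[0][0]; // first output, channel 0 (1-channel WAV)
    let filled = 0;
    while (filled < output.length && this.queue.length) {
      const chunk = this.queue[0];
      const n = Math.min(chunk.length, output.length - filled);
      output.set(chunk.subarray(0, n), filled);
      filled += n;
      if (n === chunk.length) this.queue.shift();
      else this.queue[0] = chunk.subarray(n);
    }
    return true; // keep the processor alive while more audio may arrive
  }
}
registerProcessor('pcm-player', PCMPlayerProcessor);
```

```js
// main.js — route the worklet's output through a MediaStream to a chosen device.
const context = new AudioContext();
await context.audioWorklet.addModule('pcm-player-processor.js');
const worklet = new AudioWorkletNode(context, 'pcm-player');
const destination = new MediaStreamAudioDestinationNode(context);
worklet.connect(destination);

// Play the MediaStream in an <audio> element so setSinkId() can pick the device.
const audio = new Audio();
audio.srcObject = destination.stream;
await audio.setSinkId(deviceId); // deviceId from enumerateDevices()
await audio.play();

// Convert espeak-ng's 16-bit mono WAV bytes to Float32 samples in [-1, 1]
// (assumes the canonical 44-byte RIFF header; a robust parser would walk chunks).
function wavToFloat32(arrayBuffer) {
  const pcm = new Int16Array(arrayBuffer, 44);
  const samples = new Float32Array(pcm.length);
  for (let i = 0; i < pcm.length; i++) samples[i] = pcm[i] / 32768;
  return samples;
}

// wavBytes: ArrayBuffer of espeak-ng output relayed from the Native Messaging host.
const samples = wavToFloat32(wavBytes);
worklet.port.postMessage(samples.buffer, [samples.buffer]);
```

Note that an `AudioContext` may start suspended until a user gesture, and that the espeak-ng WAV sample rate should match (or be resampled to) `context.sampleRate` for correct pitch; both details are omitted above for brevity.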

-- 
GitHub Notification of comment by guest271314
Please view or discuss this issue at https://github.com/w3c/mediacapture-output/issues/102#issuecomment-691523901 using your GitHub account


-- 
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config

Received on Saturday, 12 September 2020 17:51:31 UTC