- From: guest271314 via GitHub <sysbot+gh@w3.org>
- Date: Wed, 08 Apr 2020 03:46:26 +0000
- To: public-webrtc-logs@w3.org
On the other hand, "speech" input can take the form of code rather than audio produced by a human or output to speakers or another device: a stream of bytes; markup, e.g., SSML (https://www.w3.org/TR/2010/REC-speech-synthesis11-20100907/); International Phonetic Alphabet notation; ASML (https://www.w3.org/community/synthetic-media/wiki/Articulatory_Synthesis_Markup_Language); or other data structures. From the gist of this issue, the consideration appears to be analysis of audio input to a `MediaStreamTrack` (e.g., from a microphone), rather than a "codec" for the various forms of streaming (encoded) "speech" data? (See the sketch below.)

-- 
GitHub Notification of comment by guest271314
Please view or discuss this issue at https://github.com/w3c/mst-content-hint/issues/39#issuecomment-610735886 using your GitHub account
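If the question is indeed about audio carried on a `MediaStreamTrack`, a minimal sketch of the case under discussion might look like the following. It applies the "speech" hint via the `contentHint` attribute defined by the mst-content-hint spec; browser support varies, hence the feature check:

```js
// Minimal sketch: tag a microphone audio track as carrying speech,
// using the contentHint attribute from the mst-content-hint spec.
// Requires a secure context and microphone permission.
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const [track] = stream.getAudioTracks();

if ('contentHint' in track) {
  // Hints that the track carries spoken audio, so implementations may
  // tune processing (e.g., noise suppression, codec choice) for speech.
  track.contentHint = 'speech';
}

console.log(track.contentHint); // "speech" where supported, "" otherwise
```

Note the hint is advisory only: it does not change what bytes flow on the track, which is the distinction drawn above between hinting at live audio content and defining a codec for encoded "speech" data such as SSML or IPA notation.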
Received on Wednesday, 8 April 2020 03:46:28 UTC