
Re: [mst-content-hint] Differentiate between speech for human and machine consumption (#39)

From: guest271314 via GitHub <sysbot+gh@w3.org>
Date: Wed, 08 Apr 2020 03:46:26 +0000
To: public-webrtc-logs@w3.org
Message-ID: <issue_comment.created-610735886-1586317585-sysbot+gh@w3.org>
On the other hand, "speech" input can take the form of code rather than audio produced by a human or audio output to speakers or a device; that is, a stream of bytes: markup, e.g., SSML (https://www.w3.org/TR/2010/REC-speech-synthesis11-20100907/); International Phonetic Alphabet notation; ASML (https://www.w3.org/community/synthetic-media/wiki/Articulatory_Synthesis_Markup_Language); or other data structures.
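To illustrate the point about markup as "speech" input, a minimal SSML 1.1 document (per the linked W3C Recommendation) encodes speech as structured text rather than as an audio signal; the element names and attributes below are taken from the SSML specification, and the sample sentence is only illustrative:

```xml
<?xml version="1.0"?>
<speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis"
       xml:lang="en-US">
  <!-- prosody and emphasis describe how the text should be rendered -->
  <prosody rate="slow" pitch="+2st">
    Hello, <emphasis level="strong">world</emphasis>.
  </prosody>
</speak>
```

A synthesizer consumes this markup directly; no microphone or speaker output is involved until (or unless) rendering occurs.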

From the gist of this issue, the consideration appears to be analysis of audio input to a `MediaStreamTrack` (e.g., from a microphone), rather than a "codec" for different forms of streamed (encoded) "speech" data?
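For context, mst-content-hint already defines a "speech" hint for audio tracks; a separate machine-consumption value is what this issue discusses. A sketch, where the `"speech-recognition"` value and the helper function name are assumptions for illustration, not confirmed spec text:

```javascript
// Sketch only. "speech" is the audio content hint defined in
// mst-content-hint; "speech-recognition" here stands in for the
// machine-consumption hint this issue proposes (hypothetical value).
function speechHintFor(consumer) {
  // Map the intended consumer of the audio to a content hint string.
  switch (consumer) {
    case "human":
      return "speech";             // optimize processing for human listeners
    case "machine":
      return "speech-recognition"; // proposed hint for ASR-style consumers
    default:
      return "";                   // empty string: no hint, UA decides
  }
}

// In a browser, the hint would be applied to an audio track, e.g.:
//   const [track] = stream.getAudioTracks();
//   track.contentHint = speechHintFor("machine");
```

The hint does not change what bytes flow through the track; it only advises the user agent on how to process them, which is why a distinct hint (rather than a codec) is the mechanism under discussion.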

GitHub Notification of comment by guest271314
Please view or discuss this issue at https://github.com/w3c/mst-content-hint/issues/39#issuecomment-610735886 using your GitHub account
Received on Wednesday, 8 April 2020 03:46:28 UTC
