- From: Young, Milan <Milan.Young@nuance.com>
- Date: Thu, 24 Jan 2013 23:07:21 +0000
- To: Eitan Isaacson <eitan@mozilla.com>, "public-speech-api@w3.org" <public-speech-api@w3.org>
Hello Eitan,

The URIs in the constructors to speech recognition and synthesis were intended for both:

* Internal browser consumption (the opaque string you mention)
* A means of pointing the host browser to an external speech service. The browser would need some published protocol to communicate with the remote.

I request we preserve both use cases.

Regards

> -----Original Message-----
> From: Eitan Isaacson [mailto:eitan@mozilla.com]
> Sent: Wednesday, January 23, 2013 11:24 AM
> To: public-speech-api@w3.org
> Subject: voiceURI usage
>
> Hi.
>
> I have been trying to wrap my head around the use of voiceURI, both as an
> attribute of SpeechSynthesisVoice and SpeechSynthesisUtterance.
>
> It seems to me that a UA implementation would have some internal (and
> potentially extendable) speech services. So as I understand it, the URI would
> only have internal meaning to the UA, and should really be treated as an
> opaque immutable token by content scripts. If this is the case, the usage of
> voiceURI is unclear to me, and the suggestion of removing voiceURI as filed in
> bug 20529[1] makes sense.
>
> On the other hand, maybe I don't understand how voiceURIs should be used.
> For example, could you optionally provide the speech service part of the URI
> without the trailing voice identifier and have the service choose the default
> voice for that service (this differs from the global UA default voice)? Or could
> you provide an HTTP URI in a SpeechSynthesisUtterance where some standard
> GET request would be used to retrieve the synthesized audio?
>
> Could someone please provide examples of what voiceURIs might look like?
> Both for local and remote services.
>
> Cheers,
> Eitan.
>
> 1. https://www.w3.org/Bugs/Public/show_bug.cgi?id=20529
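A minimal content-script sketch of the two use cases described above, written against the Editor's Draft as it stood at the time (SpeechSynthesisUtterance still exposed a voiceURI attribute; current DOM typings no longer include it, so the interface below re-adds it for the sketch). The remote service URL is made up for illustration, and whether a UA must honour such a URI is exactly the open question in this thread.

```typescript
// The 2012/2013 draft put voiceURI directly on the utterance; widen the
// built-in type so the sketch compiles against modern DOM typings.
interface DraftUtterance extends SpeechSynthesisUtterance {
  voiceURI: string;
}

// Use case 1: voiceURI as an opaque, UA-internal token. The script never
// parses the URI; it only copies it from an enumerated voice back onto an
// utterance.
const voices = window.speechSynthesis.getVoices();
const british = voices.find((v) => v.lang === "en-GB");

const local = new SpeechSynthesisUtterance("Hello, Eitan.") as DraftUtterance;
if (british) {
  local.voiceURI = british.voiceURI;
}
window.speechSynthesis.speak(local);

// Use case 2 (hypothetical): voiceURI pointing the UA at an external speech
// service, e.g. a service URI with no trailing voice identifier so the
// service picks its own default voice. The protocol the UA would use to
// talk to the service is not specified here.
const remote = new SpeechSynthesisUtterance(
  "Hello from a remote engine."
) as DraftUtterance;
remote.voiceURI = "https://tts.example.com/engines/default";
window.speechSynthesis.speak(remote);
```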
Received on Thursday, 24 January 2013 23:07:52 UTC