
Re: Poor error events in SpeechSynthesis API

From: Eitan Isaacson <eisaacson@mozilla.com>
Date: Thu, 07 Mar 2013 10:26:04 -0800
Message-ID: <5138DBBC.50401@mozilla.com>
To: public-speech-api@w3.org
This all looks great; a quick tangent below...

On 03/07/2013 10:11 AM, Dominic Mazzoni wrote:
>           "language-not-supported"  // unsure if necessary, since
> should use getVoices()

The way I understand the voice matching, an API consumer could simply
send an utterance to speak with a language specified, without calling
getVoices() first, trusting the user agent to find the best match.
For example:

The app specifies en-GB in an utterance. The UA first searches for an
en-GB default voice; if none is found, it searches for an en-GB
non-default voice; failing that, it searches for the first default voice
with an "en" language prefix, and finally for an "en"-prefixed
non-default voice. So hypothetically, the matching voice might be en-US.
If there are no English voices at all, a "language-not-supported" error
could be raised, because there is no voice that could synthesize the
utterance.
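The fallback order described above could be sketched roughly as follows. This is a hypothetical illustration, not spec text: the `matchVoice` helper and the plain `{ lang, default }` voice objects are stand-ins for SpeechSynthesisVoice, and the exact matching rules are up to the user agent.

```javascript
// Hypothetical sketch of the described voice-matching fallback.
// Each voice mimics a SpeechSynthesisVoice: { lang, default }.
function matchVoice(voices, lang) {
  const prefix = lang.split("-")[0];
  return (
    voices.find(v => v.lang === lang && v.default) ||       // en-GB default
    voices.find(v => v.lang === lang) ||                    // en-GB non-default
    voices.find(v => v.lang.split("-")[0] === prefix &&
                     v.default) ||                          // "en"-prefixed default
    voices.find(v => v.lang.split("-")[0] === prefix) ||    // "en"-prefixed non-default
    null  // no match: the UA could raise "language-not-supported"
  );
}

// Example: en-GB requested, only en-US and fr-FR available.
const voices = [
  { lang: "en-US", default: true },
  { lang: "fr-FR", default: true },
];
const match = matchVoice(voices, "en-GB"); // falls through to en-US
```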
Received on Thursday, 7 March 2013 18:26:36 GMT
