Re: Poor error events in SpeechSynthesis API

This all looks great; a quick tangent below...

On 03/07/2013 10:11 AM, Dominic Mazzoni wrote:
>           "language-not-supported"  // unsure if necessary, since
> should use getVoices()

The way I understand the voice matching, an API consumer could simply
send an utterance to speak with a language specified, without calling
getVoices() first, and trust the user agent to find the best match.
For example:

The app specifies en-GB in an utterance. First the UA searches for an
en-GB default voice; if that is not satisfied, it searches for an en-GB
non-default voice; if that is not satisfied, it searches for the first
"en"-prefixed default voice; and if there is still no luck, it searches
for an "en"-prefixed non-default voice. So, hypothetically, the matching
voice might end up being en-US. If there are no English voices at all,
then a "language-not-supported" error could be raised, because there is
no voice that could synthesize the utterance.
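To make that concrete, here is a rough TypeScript sketch (my own, not
from the spec): the consumer only sets utterance.lang and never calls
getVoices(); findVoice() is a hypothetical stand-in for the UA-internal
matching, and "language-not-supported" is the error name being discussed
in this thread.

    // Consumer side: speak with only a language hint, no getVoices() call.
    const utterance = new SpeechSynthesisUtterance("Hello, world");
    utterance.lang = "en-GB";
    utterance.onerror = (event) => {
      // Hypothetical error value, per the proposal in this thread.
      console.log("speech error:", event.error); // e.g. "language-not-supported"
    };
    speechSynthesis.speak(utterance);

    // Hypothetical sketch of the UA-side fallback described above.
    // findVoice is not part of the API; it only illustrates the matching order.
    function findVoice(
      voices: SpeechSynthesisVoice[],
      lang: string // e.g. "en-GB"
    ): SpeechSynthesisVoice | null {
      const prefix = lang.split("-")[0]; // "en"
      return (
        // 1. Exact-language default voice (en-GB, default).
        voices.find((v) => v.lang === lang && v.default) ??
        // 2. Exact-language non-default voice (en-GB).
        voices.find((v) => v.lang === lang) ??
        // 3. Prefix-matched default voice (e.g. en-US, default).
        voices.find((v) => v.lang.startsWith(prefix) && v.default) ??
        // 4. Prefix-matched non-default voice (e.g. en-US).
        voices.find((v) => v.lang.startsWith(prefix)) ??
        // 5. No English voice at all -> "language-not-supported".
        null
      );
    }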

Received on Thursday, 7 March 2013 18:26:36 UTC