Re: SpeechSynthesisUtterance lang attribute usage

Hi,

Out of curiosity, can you clarify which browser / system you're looking
into implementing the Web Speech API on?

My understanding is that if you set the "lang" attribute of the utterance
but not "voice", the system should automatically pick the best matching
voice for that language, using its own heuristics.
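
For example, something like this (an untested sketch; "Dzień dobry" is just
a sample Polish phrase, and which voices are available varies by platform)
should let the UA choose the voice on its own:

  var utterance = new SpeechSynthesisUtterance("Dzień dobry");
  utterance.lang = "pl-PL";  // no voice set, so the UA picks one
  window.speechSynthesis.speak(utterance);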

In your example, I believe the voice would take precedence. The English
voice would try to speak the phrase. The synthesizer would still be passed
the information that the phrase's actual language is "pl-PL", so in theory
the synthesizer could try to adapt.
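
That situation would look roughly like this (again untested, and assuming
the platform actually exposes an en-US voice through getVoices()):

  // note: getVoices() may return an empty list until voices have loaded
  var voices = window.speechSynthesis.getVoices();
  var enVoice = voices.filter(function (v) {
    return v.lang === "en-US";
  })[0];
  var utterance = new SpeechSynthesisUtterance("Dzień dobry");
  utterance.voice = enVoice;  // the explicit voice wins
  utterance.lang = "pl-PL";   // passed to the synthesizer as a hint
  window.speechSynthesis.speak(utterance);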

- Dominic

On Fri, Jul 19, 2013 at 1:58 AM, Janusz Majnert <j.majnert@samsung.com> wrote:

> Hi all,
>
> We are looking into implementing the Web Speech API. There's one thing
> that I don't understand about the SpeechSynthesisUtterance interface,
> namely why there are two lang attributes, one directly in
> SpeechSynthesisUtterance and another one in SpeechSynthesisVoice.
>
> From my reading of the spec, it's the SpeechSynthesisUtterance.lang that
> tells the UA what language should be used for synthesis, to quote: "This
> attribute specifies the language of the speech synthesis for the
> utterance..."
> On the other hand, the voice objects have a predetermined language
> associated with them.
>
> So what should happen if I set SpeechSynthesisUtterance.voice to a voice
> with lang="en-US", but I set the SpeechSynthesisUtterance.lang to "pl-PL"?
>
> --
> Janusz Majnert
> Samsung R&D Institute Poland
> Samsung Electronics
>

Received on Friday, 19 July 2013 16:23:32 UTC