SpeechSynthesisUtterance lang attribute usage

Hi all,

We are looking into implementing the Web Speech API. There's one thing 
that I don't understand about the SpeechSynthesisUtterance interface, 
namely why there are two lang attributes: one directly on 
SpeechSynthesisUtterance and another on SpeechSynthesisVoice.

From my reading of the spec, it's SpeechSynthesisUtterance.lang that 
tells the UA which language should be used for synthesis; to quote: 
"This attribute specifies the language of the speech synthesis for the 
utterance..."
On the other hand, each voice object has a predetermined language 
associated with it.

So what should happen if I set SpeechSynthesisUtterance.voice to a voice 
with lang="en-US", but set SpeechSynthesisUtterance.lang to "pl-PL"?

-- 
Janusz Majnert
Samsung R&D Institute Poland
Samsung Electronics

Received on Friday, 19 July 2013 08:59:32 UTC