- From: Janusz Majnert <j.majnert@samsung.com>
- Date: Fri, 19 Jul 2013 10:58:56 +0200
- To: public-speech-api@w3.org
Hi all,

We are looking into implementing the Web Speech API. There's one thing I don't understand about the SpeechSynthesisUtterance interface: why are there two lang attributes, one directly on SpeechSynthesisUtterance and another on SpeechSynthesisVoice?

From my reading of the spec, it's SpeechSynthesisUtterance.lang that tells the UA what language should be used for synthesis. To quote: "This attribute specifies the language of the speech synthesis for the utterance..." On the other hand, voice objects have a predetermined language associated with them.

So what should happen if I set SpeechSynthesisUtterance.voice to a voice with lang="en-US", but set SpeechSynthesisUtterance.lang to "pl-PL"?

--
Janusz Majnert
Samsung R&D Institute Poland
Samsung Electronics
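For concreteness, here is a minimal sketch of the conflicting configuration in question. It runs only in a browser that implements the Web Speech API, and it assumes the UA exposes at least one en-US voice; the spec text quoted above does not say which of the two language settings should win.

```javascript
// Browser-only sketch: assumes the UA implements the Web Speech API.
const utterance = new SpeechSynthesisUtterance("Dzień dobry");

// Pick a voice whose predetermined language is en-US, if one is available.
// (getVoices() may return an empty list until the voiceschanged event fires.)
const enVoice = speechSynthesis.getVoices().find((v) => v.lang === "en-US");
if (enVoice) {
  utterance.voice = enVoice;
}

// ...but request Polish on the utterance itself.
utterance.lang = "pl-PL";

// The utterance now carries two conflicting language signals:
// utterance.voice.lang === "en-US" vs. utterance.lang === "pl-PL".
speechSynthesis.speak(utterance);
```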
Received on Friday, 19 July 2013 08:59:32 UTC