- From: Dominic Mazzoni <dmazzoni@google.com>
- Date: Fri, 19 Jul 2013 09:23:04 -0700
- To: Janusz Majnert <j.majnert@samsung.com>
- Cc: "public-speech-api@w3.org" <public-speech-api@w3.org>
- Message-ID: <CAFz-FYwSeGaoNq+cd6VXDTdDY2h5oLSvLuxmO5sKHkeE8A58MQ@mail.gmail.com>
Hi,

Out of curiosity, can you clarify which browser / system you're looking into implementing the Web Speech API on?

My understanding is that if you set the "lang" attribute of the utterance but not "voice", the system should automatically pick the best matching voice for that language, using its own heuristics.

In your example, I believe the voice would take precedence. The English voice would try to speak the phrase. The synthesizer would be passed the information that the phrase's actual language is "pl-PL", so it's possible in theory the synthesizer could try to adapt.

- Dominic

On Fri, Jul 19, 2013 at 1:58 AM, Janusz Majnert <j.majnert@samsung.com> wrote:

> Hi all,
>
> We are looking into implementing the Web Speech API. There's one thing
> that I don't understand about the SpeechSynthesisUtterance interface,
> namely why there are two lang attributes, one directly in
> SpeechSynthesisUtterance and another one in SpeechSynthesisVoice.
>
> From my reading of the spec, it's the SpeechSynthesisUtterance.lang that
> tells the UA what language should be used for synthesis; to quote: "This
> attribute specifies the language of the speech synthesis for the
> utterance..."
> On the other hand, the voice objects have a predetermined language
> associated with them.
>
> So what should happen if I set SpeechSynthesisUtterance.voice to a voice
> with lang="en-US", but I set SpeechSynthesisUtterance.lang to "pl-PL"?
>
> --
> Janusz Majnert
> Samsung R&D Institute Poland
> Samsung Electronics
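For illustration, a minimal sketch of the scenario under discussion, assuming a browser that exposes the Web Speech API (`speechSynthesis`, `SpeechSynthesisUtterance`); the sample text and the choice of an en-US voice are only for demonstration:

```ts
// The utterance's text is declared Polish via lang...
const utterance = new SpeechSynthesisUtterance("Dzień dobry");
utterance.lang = "pl-PL";

// ...but an English (en-US) voice is explicitly selected, if one is available.
const enVoice = speechSynthesis
  .getVoices()
  .find((v) => v.lang === "en-US");
if (enVoice) {
  utterance.voice = enVoice;
}

// Per the discussion above, the explicitly set voice is expected to take
// precedence, while lang ("pl-PL") is still passed to the synthesizer as a
// hint about the phrase's actual language.
speechSynthesis.speak(utterance);
```

Note that `getVoices()` may return an empty list until the browser has loaded its voice list, in which case no voice is set here and the engine falls back to its own heuristics.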
Received on Friday, 19 July 2013 16:23:32 UTC