W3C home > Mailing lists > Public > w3c-wai-ua@w3.org > October to December 1998

LANG - general discussion

From: Charles McCathieNevile <charlesn@srl.rmit.EDU.AU>
Date: Sat, 14 Nov 1998 12:35:49 +1100 (EST)
To: Al Gilman <asgilman@access.digex.net>
cc: w3c-wai-ua@w3.org
Message-ID: <Pine.SUN.3.91.981114122221.24986D-100000@sunrise>
Al said that, as he understands it, Japanese is the most phonetic major 
language around.

That is pretty true. Japanese is built out of phonetic characters, and 
the possible ways in which juxtaposition of characters can affect the 
pronunciation are very few and very regular, whereas in many European 
languages they are far more numerous and much less regular - English is 
the extreme example of this so far as I know. Basque also has a lot of 
possibilities, but greater regularity.

However, this can lead to problems if the language spoken is not known. 
When I was in Vietnam I learned enough Vietnamese to buy a beer or 
a meal. My intonation was actually pretty good, which is unusual for 
beginners. (I cheated - I had learned a few other things first.) But I am 
very tall and blond and red-bearded, so people listening to me expected 
me to be speaking a European language. It would take them about a minute 
to realise that I was speaking their native language. Earlier in the 
piece, and sometimes in Australia, where I don't practise it enough, they 
simply never realise.

Given that most speech synthesisers are ordinary at best, the confusion 
is likely to arise much more often. Having an idea of which language is 
being used could be helpful because it would provide the speech 
synthesiser with clues about how to get the pronunciation right, and the 
user agent could also simply state what the language is, providing the 
listener with clues about what to listen for (or whether to simply skip 
to the next bit in a language they understand).
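In HTML that language information can be carried on any element with the 
lang attribute (HTML 4.0, section 8.1), which a user agent could pass to 
the synthesiser or announce to the listener. A minimal sketch (the 
particular phrases are just illustrative):

```html
<html lang="en">
<body>
<!-- The document as a whole is English; the quoted greeting is
     explicitly marked as Vietnamese ("vi"), so a speech synthesiser
     can switch pronunciation rules, or a listener can be told the
     language and choose to skip it. -->
<p>He greeted me in Vietnamese: <q lang="vi">Xin ch&agrave;o</q>.</p>
</body>
</html>
```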

Charles McCathieNevile
Received on Friday, 13 November 1998 20:39:49 UTC
