- From: David Woolley <david@djwhome.demon.co.uk>
- Date: Sun, 4 Apr 2004 12:29:50 +0100 (BST)
- To: w3c-wai-ig@w3.org
> Now consider a modern word processor like MS Word. Even if 10 different
> languages are used in 10 paragraphs on the same page, the spell checker
> has no problem identifying the change of natural language and applying

Certainly not true of Word 97 (which I use in the office) or earlier.

> the right dictionary for each paragraph. No indication of change of
> natural language is needed by the author.
>
> Maybe it is more realistic in many situations to leave indication of
> change in natural language to user agents than to expect web page
> authors to do the job. Web page authors should probably still indicate

If you are going to do language guessing, in this context, the correct
place is in the consolidator or other authoring tool, i.e. client side.

> indicating change of natural language to a handful of user agents and
> save millions of web page authors a lot of work?

The most problematic cases are those where language guessing wouldn't
work anyway, because they involve odd words and phrases embedded in a
different dominant language. Where language guessing is possible, the
work for authors should be no more than a one-time enabling of language
guessing in their authoring toolset, or, if they author with notepad,
running a language guessing processor just before uploading.

It doesn't help for language negotiation, as I think it is well beyond
the state of the art to correctly rank the quality of individual
language versions, and the user agent would have to fetch them all if
it were to do the guessing.

Also, note that character sets are only a weak indicator of language.
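For illustration, a pre-upload language guessing processor of the sort
I mean could be as simple as the sketch below (a minimal sketch, not a
tested tool: the tiny stopword lists, the one-paragraph-per-line input
format, and the choice to tag whole paragraphs are all my assumptions;
real guessers use n-gram models over far more data). Note that it picks
the dominant language of each paragraph, so it exactly fails on the
problematic case above, odd foreign words inside a paragraph.

    #!/usr/bin/env python3
    # Minimal sketch of a pre-upload language guessing processor.
    # Assumptions (illustrative only): input is plain text with one
    # paragraph per line, and the stopword lists below suffice to pick
    # each paragraph's dominant language.  Odd foreign words and
    # phrases inside a paragraph are NOT caught.

    import sys

    # Tiny illustrative stopword lists, one set per guessable language.
    STOPWORDS = {
        "en": {"the", "and", "of", "to", "is", "in", "that", "it"},
        "fr": {"le", "la", "les", "et", "de", "est", "que", "dans"},
        "de": {"der", "die", "das", "und", "ist", "nicht", "ein", "zu"},
    }

    def guess_language(paragraph, default="en"):
        """Return the language whose stopwords occur most often."""
        words = paragraph.lower().split()
        scores = {lang: sum(w in sw for w in words)
                  for lang, sw in STOPWORDS.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else default

    for line in sys.stdin:
        text = line.strip()
        if text:
            # Emit each paragraph wrapped in <p lang="..."> markup.
            print('<p lang="%s">%s</p>' % (guess_language(text), text))
        else:
            print()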
Received on Sunday, 4 April 2004 07:29:59 UTC