- From: David Goldsmith <David_Goldsmith@taligent.com>
- Date: Tue, 17 Jan 1995 15:51:11 -0800
- To: Gavin Nicol <gtn@ebt.com>
- Cc: http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com, html-wg@oclc.org, rick0@allette.com.au, www@unicode.org, john_jenkins@taligent.com
At 5:50 PM 1/17/95, Gavin Nicol wrote:

>Yes, and I am very hesitant to suggest using any special purpose
>codes. The problem is that there does need to be some standard (low
>level) way of saying that some text is in Japanese, and some text is
>in Chinese. Now, the real debate is how to represent this, and I think
>the recent idea I proposed is not bad.

This is really the crux of the matter. Why do you think there needs to be a *low level* way of differentiating Japanese and Chinese text? The WWW seems to operate quite well now without any way to differentiate German, English, French, or Italian text (all handled by 8859-1). What problems -- specifically -- would arise in typical WWW applications if such text is not tagged? How would lack of this information impede you when writing Unicode-capable servers and clients, and how would it impact end users?

I fully agree that language (and font, and style, and ...) tags are useful and highly desirable at a high level, and support for this should be added to HTML (or at whatever level is appropriate). I don't think that language is any different from these other attributes, nor does it need special treatment.

Doing it at a low level adds complexity and complicates clients and servers that use Unicode. There has to be a compelling reason to add this complexity. There has to be a problem that it solves. Given that work is in progress to add language information at a higher level (HTML 3), it seems to me that there would have to be an extraordinarily strong reason to add this information at a low level as well.

David Goldsmith
Senior Scientist
Taligent, Inc.
10201 N. DeAnza Blvd.
Cupertino, CA 95014-2233
david_goldsmith@taligent.com
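[Editor's illustration: the "higher level" tagging referred to above would mean marking language in the markup rather than in the character stream. The sketch below is an assumption for illustration only; the attribute name and ISO 639 codes shown follow what later became standard HTML practice, not any specific HTML 3 draft under discussion in this message.]

    <!-- Hypothetical markup-level language tagging (illustrative sketch): -->
    <P LANG="ja">... Japanese text; a client could select Japanese glyph forms ...</P>
    <P LANG="zh">... Chinese text; a client could select Chinese glyph forms ...</P>

Here the plain Unicode text stream carries no language codes at all; the client derives language from the enclosing markup, which is the division of labor Goldsmith is arguing for.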
Received on Tuesday, 17 January 1995 15:54:54 UTC