
Determining if Unicode / appropriate glyphs installed on client

From: Mustafa Ali (UrduWord.com) <newsletters@urduword.com>
Date: Tue, 9 Jul 2002 00:37:25 -0500
Message-ID: <000901c2270a$b5c3a7d0$0fb0a842@alidesktop>
To: <www-international@w3.org>

Hello,

Is there an efficient method to determine whether certain multilingual content
will display correctly in a browser (considering the localized text encoding,
language font repository, and text-processing algorithms), regardless of the
user's Accept-Language preferences? Help covering both HTML and XHTML would be
appreciated.

For example, using Arial Unicode MS and IE6, I can easily read websites in
Arabic, Japanese, and most other languages. However, the browser does not (and
practically cannot) advertise the entire range of languages it can display.
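
(For what it's worth, the only signals the browser does send are the Accept-*
request headers, and those express preference rather than capability. A rough
server-side sketch, assuming the raw Accept-Language header value is at hand;
the helper name is mine, not any standard API:

    // Rough sketch: treat Accept-Language as a weak hint only. A header like
    // "en-US,en;q=0.9,ur;q=0.8" says the user *prefers* these languages; it
    // says nothing about which fonts or shaping engines are installed.
    function prefersUrdu(acceptLanguage: string | undefined): boolean {
      if (!acceptLanguage) return false;
      return acceptLanguage
        .split(",")                             // one entry per language range
        .map(part => part.trim().split(";")[0]) // drop any ";q=..." weight
        .map(tag => tag.toLowerCase())
        .some(tag => tag === "ur" || tag.startsWith("ur-"));
    }

Even when this returns true the client may still lack the glyphs, and a fully
capable client may never list "ur" at all, which is exactly my problem.)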

Taking my own website (http://www.urduword.com) as an example: the site is
primarily in English, but does include Urdu-language (Arabic-script) content.
Currently, I use pre-rendered GIF images instead of actual Unicode text to
ensure that the multilingual content appears on all browsers and systems.
Still, this is a crude and inefficient approach for users who do have the
required glyphs and text-processing capabilities. At the same time, I don't
want to abandon my large user base on systems without Urdu support by default
(many operating systems, Openwave WAP browsers, etc.).
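
(The closest thing to a direct capability test I can think of is a client-side
width-measurement heuristic: render the Urdu character next to a codepoint that
no font should supply, and compare the measured widths. A minimal sketch,
assuming JavaScript and DOM support in the browser; the function name is
hypothetical, and the test can misfire when a font gives its missing-glyph box
the same advance width as real glyphs:

    // Heuristic glyph test: if the test character renders at a different
    // width than a guaranteed-missing codepoint, some installed font
    // supplied a real glyph for it.
    function canRenderGlyph(testChar: string): boolean {
      const probe = document.createElement("span");
      probe.style.position = "absolute";
      probe.style.visibility = "hidden";
      probe.style.fontSize = "64px";       // large size amplifies differences
      document.body.appendChild(probe);

      // Baseline: U+FFFF is a noncharacter, so it should draw as the
      // missing-glyph box in every font.
      probe.textContent = "\uFFFF\uFFFF\uFFFF\uFFFF";
      const missingWidth = probe.getBoundingClientRect().width;

      // Candidate: the character we care about, repeated to match.
      probe.textContent = testChar + testChar + testChar + testChar;
      const testWidth = probe.getBoundingClientRect().width;

      document.body.removeChild(probe);
      return testWidth !== missingWidth;
    }

    // Example: U+06CC ARABIC LETTER FARSI YEH, common in Urdu text.
    const showRealText = canRenderGlyph("\u06CC");

If the test passes I could swap the GIF images for real Unicode text; if it
fails, the GIFs stay. Even then it only proves glyph presence, not that the
contextual joining of Arabic script will be shaped correctly.)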

Again, is there an efficient method to determine the localized text encoding,
language font repository, and text-processing algorithms supported by the
client? How does CC/PP handle this issue?

Thanks for your time,
Mustafa Ali
mustafa@urduword.com
Received on Tuesday, 9 July 2002 01:37:38 GMT
