
RE: Internationalization (was [w3c-wai-ig] <none>)

From: <jon@spin.ie>
Date: Wed Oct 1 12:54:10 2003
To: "John Foliot - WATS.ca" <foliot@wats.ca>, W3c-Wai-Ig <w3c-wai-ig@w3.org>
Message-Id: <20031001165409.A324B136AB@dr-nick.w3.org>

> I don't have the answer, but unfortunately it seems to me that at some point
> visual clues (be they flags or bitmaps of the "text" with appropriate
> alt
> text rendered in Roman scripting) would have to be the most accessible
> solution (pragmatism vs. standards zealously).  Thoughts?

I could see most, though not all, of the characters on that page. The reason I could see them is that I have a font installed (Code2000) which sacrifices quality in order to pack in a very large number of glyphs. While this isn't a high-quality representation, it is a useful fallback behaviour.

Apple have a Last Resort font <http://developer.apple.com/fonts/LastResortFont/>. It offers fallback behaviour for characters for which no glyph can be found, by displaying a glyph that represents the Unicode block the character came from, which at least gives you a clue when looking for an appropriate font to display it properly.
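A rough analogue of that "clue" can be sketched in Python: given an undisplayable character, report its codepoint and Unicode name, which points you toward a script (and so a font) that covers it. This is just an illustration of the idea, not how the Last Resort font itself works.

```python
import unicodedata

def describe_char(ch):
    """Return a hint like 'U+0915 DEVANAGARI LETTER KA' for a character.
    Knowing the script/block a character belongs to helps in choosing
    a font that actually contains a glyph for it."""
    code = f"U+{ord(ch):04X}"
    name = unicodedata.name(ch, "<unnamed>")
    return f"{code} {name}"

print(describe_char("\u0915"))  # U+0915 DEVANAGARI LETTER KA
```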

With a font like Code2000 (and Code2001, which goes beyond the BMP), multi-lingual text should at least be legible. Okay, we need to get those fonts onto users' machines first, but I think that's going to happen sooner than machines reading text in images.

Another possibility is font-embedding: embedding would allow you to provide a font that contains glyphs for all the characters you used. However, I'd prefer to be able to embed a font that is used only when the user's personal choices couldn't cope with the text in question; that, I think, would be the best-of-all-possible-worlds solution.
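The decision rule in that last sentence can be sketched as follows. This is a hypothetical illustration only (the coverage sets stand in for real font cmap tables, and `needs_embedded_font` is not a real API): fall back to an embedded font only if some character in the text is covered by none of the user's installed fonts.

```python
def needs_embedded_font(text, installed_coverage):
    """Return True only if some non-space character in `text` is missing
    from every installed font's coverage, i.e. the user's own fonts
    couldn't cope and an embedded fallback would be justified.
    `installed_coverage` is a list of sets of characters (hypothetical
    stand-ins for the character repertoires of installed fonts)."""
    covered = set().union(*installed_coverage) if installed_coverage else set()
    return any(ch not in covered for ch in text if not ch.isspace())

# Hypothetical coverage for illustration:
latin = set("abcdefghijklmnopqrstuvwxyz")
print(needs_embedded_font("hello", [latin]))        # False: user fonts cope
print(needs_embedded_font("h\u00e9llo", [latin]))   # True: é is uncovered
```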
Received on Wednesday, 1 October 2003 12:54:10 UTC
