RE: Internationalization (was [w3c-wai-ig] <none>)

> I don't have the answer, but unfortunately it seems to me that at some point
> visual clues (be they flags or bitmaps of the "text" with appropriate alt
> text rendered in Roman scripting) would have to be the most accessible
> solution (pragmatism vs. standards zealotry).  Thoughts?

I could see most, though not all, of the characters on that page. The reason I could see them is that I have a font installed (Code2000) which sacrifices quality in order to pack in a large number of glyphs. While this won't be a high-quality representation, it is a useful fallback behaviour.

Apple have a Last Resort font <http://developer.apple.com/fonts/LastResortFont/>. It provides fallback behaviour for characters for which no glyph can be found, by displaying a glyph representing the Unicode block the character came from - which at least gives you a clue for finding an appropriate font to display it properly.
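As an aside, you can get the same sort of clue programmatically. A minimal sketch in Python (using the standard unicodedata module, which reports character names rather than block names, but a name is usually clue enough about the script and hence the font needed):

```python
import unicodedata

# For each character, print its code point and Unicode character name,
# which hints at which script (and so which font) is required to show it.
for ch in "\u4e2d\u0645":  # a CJK ideograph and an Arabic letter
    name = unicodedata.name(ch, "<unknown>")
    print(f"U+{ord(ch):04X}  {name}")
```

Running this prints "U+4E2D  CJK UNIFIED IDEOGRAPH-4E2D" and "U+0645  ARABIC LETTER MEEM", telling you immediately that you need CJK and Arabic coverage.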

With a font like Code2000 (and Code2001, which goes beyond the BMP), multi-lingual text should at least be legible. Granted, we need to get those fonts onto users' machines first, but I think that's going to happen sooner than machines will be able to read text rendered in images.
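A page author can also nudge browsers toward such a font without overriding the reader's better fonts, by listing it late in the CSS font stack. A sketch (the particular family names here are only illustrative, and per-character fallback behaviour varies between browsers):

```css
/* Prefer the reader's higher-quality fonts; browsers that support
   per-character fallback will dip into the pan-Unicode Code2000/Code2001
   only for characters the earlier fonts lack. */
body {
  font-family: "Times New Roman", "Arial Unicode MS",
               Code2000, Code2001, serif;
}
```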

Another possibility is font embedding, which would allow you to provide a font containing glyphs for all the characters you used. However, I'd prefer to be able to embed a font that is used only when the user's personal choices can't cope with the text in question; that, I think, would be the best-of-all-possible-worlds solution.
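In CSS terms that arrangement might look something like the sketch below - the font name and file are hypothetical, and the exact source format you can embed depends on the browser - with the embedded face placed after the user-preferred families so it only catches characters they miss:

```css
/* Hypothetical embedded fallback face; declared via @font-face and
   listed last so the reader's own fonts win whenever they have a glyph. */
@font-face {
  font-family: "EmbeddedFallback";          /* assumed name */
  src: url("fallback-font.woff");           /* assumed file */
}

body {
  font-family: "Times New Roman", "EmbeddedFallback", serif;
}
```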

Received on Wednesday, 1 October 2003 12:54:10 UTC