
Re: The murky intersection of accessibility and internationalization

From: Andrew Cunningham <andj.cunningham@gmail.com>
Date: Mon, 9 Jan 2017 15:35:14 +1100
Message-ID: <CAOUP6K=dapqo+krPNqeygN4wQo0aGmj4YoFspZavbUfWMaAu2g@mail.gmail.com>
To: "Sean Murphy (seanmmur)" <seanmmur@cisco.com>
Cc: WAI Interest Group <w3c-wai-ig@w3.org>

Hi Sean,

Yes, there are BCP47 language subtags for the relevant languages,
assuming the correct language tag is actually applied.
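
For reference, a quick sketch (in Python) of how the relevant BCP47
primary language subtags might be mapped and emitted as HTML lang
attributes. The list of languages here is illustrative, not exhaustive:

```python
# Illustrative mapping of some Myanmar-script languages to their
# BCP 47 primary language subtags (ISO 639 codes).
MYANMAR_SCRIPT_LANGS = {
    "my": "Burmese",
    "shn": "Shan",
    "mnw": "Mon",
    "ksw": "S'gaw Karen",
}

def lang_attribute(subtag: str) -> str:
    """Build an HTML lang attribute string for a known subtag."""
    if subtag not in MYANMAR_SCRIPT_LANGS:
        raise ValueError(f"unknown subtag: {subtag}")
    return f'lang="{subtag}"'

print(lang_attribute("my"))  # lang="my"
```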

Visual representation isn't a problem if appropriate fonts are
installed or web fonts are applied, and an appropriate font stack is
declared. Much more problematic is when fonts are missing or font
stacks aren't designed appropriately. Mobile platforms can be fun,
though: you may need to jailbreak (or root) the device to get
appropriate fonts installed.
The Samsung device I have at home has Myanmar Unicode support. The
same handset sold in SE Asia will come with pseudo-Unicode support
instead of Unicode support. Same device, same manufacturer, different
encoding.

In theory, for a web page you can use JS libraries that detect and
convert content between Zawgyi and Unicode client side and apply an
appropriate font. But that doesn't address other Burmese encodings,
and would probably not be effective on a site that needs to support
multiple Myanmar-script languages and a wider range of encodings.
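
A very crude detection heuristic can be sketched (here in Python
rather than JS). Zawgyi repurposes codepoints that Unicode assigns to
Mon, Shan and Karen, so if the content is known to be Burmese, seeing
those codepoints hints at Zawgyi. The ranges below are an illustrative
assumption; production detectors use statistical models and are far
more reliable:

```python
# Crude heuristic sketch: Zawgyi repurposes some codepoints that Unicode
# assigns to other Myanmar-script languages (Mon, Shan, Karen). If the
# text is KNOWN to be Burmese, their presence suggests Zawgyi encoding.
# The ranges are illustrative; real detectors use statistical models.

ZAWGYI_HINT_RANGES = [
    (0x1060, 0x1097),  # Unicode: Karen/Shan letters; Zawgyi: stacked forms
    (0x1033, 0x1034),  # Unicode: Mon vowels; Zawgyi: Burmese vowel variants
]

def looks_like_zawgyi(text: str) -> bool:
    """Return True if Burmese-only text contains Zawgyi-suggestive codepoints."""
    for ch in text:
        cp = ord(ch)
        if any(lo <= cp <= hi for lo, hi in ZAWGYI_HINT_RANGES):
            return True
    return False
```

Note the heuristic is useless on a multilingual site: a Mon or Shan
page in perfectly valid Unicode would trip the same codepoints, which
is exactly the problem described above.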

But this is all visual: the characters/letters aren't necessarily what
the browser thinks they are. The browser has no inbuilt way of knowing
what encoding is really being used, and thus no way of knowing what
characters are actually in the data. That severely impacts non-visual
access to the data.

Getting the font stack wrong introduces additional problems. Burmese
on the Centrelink (Department of Human Services) website, a federal
government site, identifies a Unicode font in the CSS, but the data is
non-Unicode, so the text is visually garbled.

In terms of screen readers, I know of work on screen readers and more
generic TTS work that was underway for Burmese. The solutions were
Unicode based, and in some cases did not provide non-Unicode support.
TTS solutions for Myanmar-script languages would need to be able to
identify or determine both language and encoding. HTML is fairly
straightforward: there are probably only three encodings you need to
deal with, IF you know that the content is actually Burmese.
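
For HTML at least, the declared language and encoding can be pulled
straight out of the markup. A minimal stdlib sketch, bearing in mind
that it only reports what the page declares, which, as the Centrelink
example shows, may not match what the data actually contains:

```python
from html.parser import HTMLParser

class LangCharsetSniffer(HTMLParser):
    """Pull the declared language and charset out of an HTML document."""

    def __init__(self):
        super().__init__()
        self.lang = None
        self.charset = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "html" and "lang" in attrs:
            self.lang = attrs["lang"]
        if tag == "meta" and "charset" in attrs:
            self.charset = attrs["charset"]

sniffer = LangCharsetSniffer()
sniffer.feed('<html lang="my"><head><meta charset="utf-8"></head></html>')
print(sniffer.lang, sniffer.charset)  # my utf-8
```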

For other file formats ... TTS would need to handle many more
encodings ... and have the added complexity of not necessarily knowing
what the language is.

Andrew

Andrew Cunningham
andj.cunningham@gmail.com


On 9 January 2017 at 14:42, Sean Murphy (seanmmur) <seanmmur@cisco.com> wrote:
> Andrew,
>
> Do the language tag codes cover the languages of concern?
>
> Visually, I can see the issues you have highlighted being a problem if there is no font support.
>
> In the world of screen readers, there are three components I can think of which could cause an issue:
> * Does the screen reader support the language by default or do you have to get third party language TTS?
> * Is there TTS (synthesizer) support for the language? When the screen reader detects the language code, it switches to the correct TTS.
> * Braille, from my knowledge, does not support UTF-8. The correct Braille language drivers need to be available for the screen reader to support the language.
>
> Everything else you outlined seems to be fine in my small neck of the woods.
>
> Sean Murphy
Received on Monday, 9 January 2017 04:35:47 UTC
