- From: Andrew Kirkpatrick <akirkpat@adobe.com>
- Date: Wed, 8 Oct 2014 17:22:50 +0000
- To: "Jens O. Meiert" <jens@meiert.com>
- CC: W3C Public Comments WCAG 2.0 <public-comments-wcag20@w3.org>
> 1) […] If a sighted user encounters text in a different language they
> are able to view the text and determine if they are able to read the
> language as they are able to view an accurate representation of the
> information and make that determination. A non-sighted user
> encountering text that is in a different language than the default
> language of the page where the language is not correctly indicated
> will hear information that will be difficult or impossible to
> identify even if the user understands the language.

Are these actually different problems, or don’t they rather support
the main criticism that language determination is not, in fact, an
accessibility issue, since it affects everyone? For the sighted user
in this example the information is just as difficult or impossible to
identify. If someone says yyudysuyudusyd and claims it means
something, we’re all at a loss.

We agree that language determination is an issue for everyone, but I'm
not sure how people who are sighted are adversely affected when the
language is not properly identified. If someone recognizes text as
being in a different language through visual recognition of that
language, then the underlying identification of the language is
irrelevant for those users. However, if someone relies on
text-to-speech and the lack of language identification results in an
inaccurate rendering of the text (it does), then that user is
disproportionately affected.

> 2) Marking up all changes does take more time than not marking up
> changes, but WCAG does not necessarily require that authors do this
> work themselves. An author could choose to employ a tool or web-based
> service to identify and properly indicate the language, if such a
> tool was available to them.

Here, too, isn’t this an acknowledgment of another point of criticism,
that tools may be able to (and should) do the job? And isn’t it rather
odd to suggest tools should mark up language, when the very same
technology could instead more conveniently be used to just process
(e.g., read aloud) that otherwise-to-be-marked-up language correctly
(and do away with the burden and eventual bloat of language code)?

The working group isn't suggesting that it should or must be done in
any particular way, just that the language needs to be identified
correctly and the end user needs to be able to get the correct
information. If in a few years' time tools were available that
identified all languages correctly, and the support existed across
browsers and assistive technologies, that would be great and could be
part of a conformance claim. If the technology became that ubiquitous,
it might even make us wonder why we need the language success criteria
at all, and if that were true the WG might consider phasing them out
in a future version of the guidelines. However, today that isn't true,
so authors need to make sure that they are employing techniques that
actually work, and unfortunately, with today's tools, doing nothing is
not an approach that ensures access.

AWK

No immediate comment needed, but for the record. Thanks for your and
the group’s reconsideration of the matter.

Best,
Jens.

--
Jens O. Meiert
http://meiert.com/en/
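In markup terms, the identification discussed above is done with
HTML's lang attribute (WCAG technique H58, "Using language attributes
to identify changes in the human language"). A minimal sketch, with
made-up page content for illustration:

    <!-- Default language of the page: English (SC 3.1.1, Language of Page) -->
    <html lang="en">
      <body>
        <!-- A phrase in another language carries its own lang value
             (SC 3.1.2, Language of Parts), so text-to-speech tools can
             switch pronunciation rules rather than garble the text. -->
        <p>As Goethe reportedly said, <span lang="de">Mehr Licht!</span></p>
      </body>
    </html>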
Received on Wednesday, 8 October 2014 17:23:21 UTC