- From: Joe Clark <joeclark@joeclark.org>
- Date: Tue, 28 Dec 2004 18:26:21 +0000 (UTC)
- To: WAI-GL <w3c-wai-gl@w3.org>
Look, this whole guideline is *insane* and unnecessary. Apparently, PiGS and the other elites of this working group are continuing to carry on blithely as though this were a remotely wise or even *theoretically* implementable guideline.

Written languages have homographs. (I note that, in keeping with the PiGS' habit of ignoring any contrary evidence, nobody but me has bothered to use that term. It refers to words with the same spelling and different pronunciations.) Homographs are an intrinsic feature. You cannot expect authors to weed through their entire text, carefully considering every multiple reading for every word (in Japanese, every on-yomi and kun-yomi, two other terms you're ignoring), and then specifically mark up each and every word that has a different pronunciation when used *somewhere else*, no matter how improbable that other context.

Get the hell out of authors' way. We've got better things to do to make our sites *actually accessible* than micromanage pronunciations of our *written* words. Pronunciations are somebody else's problem when we're writing; it is a category error on the Working Group's part to force writers to consider both the written and spoken forms simultaneously-- always and everywhere, for every word.

Then again, you're the same group of Mensa dropouts who write Level AAA as Level Triple-A because your pet screen reader can't enunciate an abbreviation (which it never occurred to you needs to be written inside <abbr></abbr> anyway).

Moreover, Slatin's suggested use of <ruby> works exclusively in XHTML 1.1, and with notable browser deficiencies. (By the way, does it work in Jaws? If not, you'll drop it like a hot potato, won't you?) Essentially, you would force every author in, e.g., Japanese to use only XHTML 1.1 documents to comply with WCAG. I thought we merely had to use markup according to specification; here you're forcing authors to use the markup you specify.
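To spell out what that would mean in practice: annotating even one Japanese word with furigana under the W3C Ruby Annotation module (available only in XHTML 1.1, as noted) looks roughly like this -- a sketch only, and you would need it for every homograph on every page:

```html
<!-- Ruby Annotation module (XHTML 1.1): the word 東京 annotated with
     its reading とうきょう; the <rp> elements supply parenthesis
     fallback for the many browsers that don't render ruby at all. -->
<p>
  <ruby>
    <rb>東京</rb>
    <rp>(</rp><rt>とうきょう</rt><rp>)</rp>
  </ruby>
  へようこそ。
</p>
```

A browser without ruby support simply shows "東京(とうきょう)へようこそ。" inline.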
And can you imagine *every page* of Japanese on the Web littered with furigana? How about every page of Hebrew littered with nikud?

Like the even more atrocious and infuriating guideline to make the ambiguous definition of every single polysemous word rectifiable by automation, this guideline:

* does not help actual people with disabilities, who have to deal with homographs anyway, as all readers must;
* is impossible to implement;
* insults authors; and
* overreaches the Working Group's mandate.

It is, further, astonishing that ivory-tower academics like Slatin and Vanderheiden delude themselves that these guidelines are even desirable or *possible*. Nonetheless, it's par for the course that you ignore contrary evidence. You're so wedded to this nonsense-- which none of you could actually comply with; then again, you aren't working Web developers-- that you're pushing right ahead and cooking up half-arsed *techniques*.

It's not gonna work, people. Keep proposing this sort of nonsense and eventually you'll start reading-- out on that Web you seem to hate so very much-- of a WCAG 2.0 backlash before it's even released. Do you really want people dismissing the WCAG Working Group as micromanaging E.U.-style language fascists? If so, keep it up.

--
Joe Clark | joeclark@joeclark.org
Accessibility <http://joeclark.org/access/>
Expect criticism if you top-post
Received on Tuesday, 28 December 2004 18:26:29 UTC