W3C home > Mailing lists > Public > w3c-wai-gl@w3.org > October to December 2004

RE: FW: [Techniques] Draft General Technique for GL 3.1 L2 SC1

From: John M Slatin <john_slatin@austin.utexas.edu>
Date: Tue, 28 Dec 2004 13:08:09 -0600
Message-ID: <6EED8F7006A883459D4818686BCE3B3B7511EA@MAIL01.austin.utexas.edu>
To: "Joe Clark" <joeclark@joeclark.org>, "WAI-GL" <w3c-wai-gl@w3.org>

Thanks, Clark.

Joe Clark wrote:
Look, this whole guideline is *insane* and unnecessary. Apparently, PiGS
and the other elites of this working group are continuing to carry on
blithely as though this were a remotely wise or even *theoretically*
implementable guideline.
I'm perfectly prepared to support removing this requirement from the
Guidelines if and when the community of readers lets us know that it
does not help people with disabilities and so should not be required.
So, if other readers out there share Mr. Clark's concerns, please let us
know.  In the meantime, my assignment is to write and/or edit material
for inclusion in the General Techniques document based on the current
wording of the Guidelines. 
Joe Clark continues:

Written languages have homographs. (I note that, in keeping with the
habit of ignoring any contrary evidence, nobody but me has bothered to
define that term. It refers to words with the same spelling and different
pronunciations.) Homographs are an intrinsic feature. You cannot expect
authors to weed through their entire text, carefully considering every
multiple reading for every word (in Japanese, every on-yomi and kun-yomi--
two other terms you're ignoring), and then specifically mark up each and
every word that has a different pronunciation when used *somewhere else*,
no matter how improbable that other context.
For what it's worth, the success criterion in the Guidelines doesn't
actually require that every word be marked up. It requires that
pronunciations and meanings can be "programmatically located." The draft
technique I submitted says only that Ruby annotation provides a
technique for meeting the requirement.  It does not say that Ruby is the
*only* way of satisfying the requirement.  
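For readers who haven't seen the markup in question, here is a minimal
sketch of simple ruby annotation as defined in the W3C Ruby Annotation
specification (the module included in XHTML 1.1); the Japanese word and
reading below are purely illustrative:

```html
<!-- Simple ruby markup per the W3C Ruby Annotation module (XHTML 1.1).
     The base text (rb) is annotated with its reading (rt); the rp
     elements supply fallback parentheses so that user agents without
     ruby support degrade to rendering: 東京(とうきょう) -->
<p>
  <ruby>
    <rb>東京</rb>
    <rp>(</rp><rt>とうきょう</rt><rp>)</rp>
  </ruby>
</p>
```

A user agent that supports ruby typically renders the reading above the
base text; one that doesn't falls back to the parenthesized form, which
is why the rp elements matter for the browser-support questions raised
in this thread.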

As Mr. Clark so eloquently puts it, this particular academic PiG is not
a professional Web developer; I count on such professionals to come up
with creative solutions to difficult problems.   Oink.

Joe rises to new heights of fury:
Get the hell out of authors' way. We've got better things to do to make 
our sites *actually accessible* than micromanage pronunciations of our 
*written* words. Pronunciations are somebody else's problem when we're 
writing.

Joe continues:
... ; it is a category error on the Working Group's part to force 
writers to consider both the written and spoken forms simultaneously-- 
always and everywhere, for every word. 

Interesting and important point. Thank you. Does your last phrase--
"always and everywhere, for every word"-- mean that there might be times
and places and words for which it *would* be important to require
information about pronunciation and meaning? Or was that just a
rhetorical flourish?

Joe says:

Moreover, Slatin's suggested use of <ruby> works exclusively in XHTML
and with notable browser deficiencies. (By the way, does it work in
Internet Explorer? If not, you'll drop it like a hot potato, won't you?)

Another interesting point. I was under the impression that IE doesn't
support XHTML 1.1 and that Opera does.  But IE appeared to render my
Ruby example correctly (even without a doctype declaration) and Opera (8
beta) did not.  JAWS 6.0 apparently rendered it correctly, but in all
honesty I'm not certain how it *should* have rendered it so I'd be glad
of more information on that point. Would you be willing to check and
report back, Joe?

Essentially, you would force every author in e.g. Japanese to use only
XHTML 1.1 to comply with WCAG. I thought we merely had to use markup
according to specification; here you're forcing authors to use the
markup you prefer.

And can you imagine *every page* of Japanese on the Web littered with 
furigana? How about every page of Hebrew littered with nikud?

Like the even more atrocious and infuriating guideline to make the 
ambiguous definition of every single polysemous word rectifiable by 
automation, this guideline:

* does not help actual people with disabilities, who have to deal with 
homographs anyway, as all readers must;

* is impossible to implement;

* insults authors; and

* overreaches the Working Group's mandate.

It is, further, astonishing that ivory-tower academics like Slatin and 
Vanderheiden delude themselves that these guidelines are even desirable
or *possible*. Nonetheless, it's par for the course that you ignore
evidence. You're so wedded to this nonsense-- which none of you could 
actually comply with; then again, you aren't working Web developers--
you're pushing right ahead and cooking up half-arsed *techniques*.

It's not gonna work, people. Keep proposing this sort of nonsense and 
eventually you'll start reading-- out on that Web you seem to hate so
much-- of a WCAG 2.0 backlash before it's even released.

Do you really want people dismissing the WCAG Working Group as 
micromanaging E.U.-style language fascists? If so, keep it up.


Sorry, but in my ivory tower one man's heated assertion that something
will or will not work doesn't count as "evidence."  If in fact there is
clear evidence that the proposed success criterion will not benefit
people with disabilities then the criterion won't survive into the final
recommendation.  And if the technique I proposed is unworkable or
inappropriate or unnecessary I'll gladly kiss it good-bye.

Slatin, PiG-- 

     Joe Clark | joeclark@joeclark.org
     Accessibility <http://joeclark.org/access/>
     Expect criticism if you top-post
Received on Tuesday, 28 December 2004 19:08:12 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 16 January 2018 15:33:51 UTC