dissenting opinion (was Re: RE: checkpoint 3.4 again)

aloha, y'all!

this is a post that should have gone out a few days ago, but has been
trapped in a corrupted outbox...  anyway, here goes:

AnneP: If others are pleased with this version (I have read Joe's
criticisms), then let's go with what Wendy has written.

GJR: i, for one, am NOT pleased with this version; like joe, i also
strenuously object in the strongest possible terms, and assert the
insufficiency of mathematics to compute my objections to the checkpoints
which have been proposed so far...

on 28 july 2001, joe clark outlined two cases:

<quote>
In Case 1, authors may use illustrations; if they make that choice,
the illustrations must meet certain goals.

In Case 2, authors have no choice in the matter and must-- in every
case, without exception, and irrespective of appropriateness,
applicability, illustration skill, budget, or undue hardship--
provide illustrations.
<unquote>

and then commented:

<quote>
Mathematicians have not yet identified a number large enough to
measure my opposition to Case 2, for reasons that have been generally
well-explained by others.
<unquote>

GJR: it has been my understanding that WCAG2 is attempting to communicate
the following: authors should use multi-modal mechanisms to reinforce key
concepts/functionalities; if authors make that choice, the illustrations
must meet certain goals.

Case 2 cannot stand the test of logic -- how can we demand that a webmaster
who has never seen create meaningful/useful illustrations for every concept?
if all concepts/functions must be illustrated using visual stimuli, why not
a parallel requirement for aural stimuli?  and if aural, then again, how can
someone who has never heard be expected to provide meaningful and
appropriate earcons?  if EVERYTHING needs a visual and/or aural equivalent,
then no collection of pre-canned icons and earcons can possibly suffice,
even if provided by a Triple-A ATAG-compliant authoring tool, which would
come pre-packaged with equivalent alternatives pre-associated with
multimedia/multi-modal content...

initially, i was going to cast my support behind keeping 3.4 as it was in
the first drafts of WCAG, with the following "success criterion":

<SUCCESS checkpoint="3.4">
Express key concepts and functionality using metadata.
</SUCCESS>
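
(a parenthetical sketch to make the metadata approach concrete -- the
property names, such as dublin core's DC.subject, and the file name
concepts.html are my own illustrative inventions, not anything the
checkpoint would mandate:

  <head>
  <title>order form: step 2 of 3 (shipping)</title>
  <!-- key concepts expressed as machine-readable metadata -->
  <meta name="DC.subject" content="shipping options; delivery time; cost">
  <!-- point to a glossary defining the key concepts used on this page -->
  <link rel="glossary" href="concepts.html" title="key concepts">
  </head>

a user agent, proxy, or authoring tool could then surface those concepts in
whatever modality the user prefers, rather than obliging the author to
hand-illustrate each one)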

but thinking that through led me to conclude that checkpoint 3.4, if
abstracted (and i believe it should be, along the lines sketched by kynn and
al), is partially subsumed, if not actually covered, by WCAG2 checkpoint 1.3

<quote>
1.3 Use markup or a data model to provide the logical structure of content.
</quote>

in that a machine could use the markup or the data model per 1.3 in order
to satisfy 3.4, at least as far as indicating/illustrating the logical
structure of content and the relationships between elements is concerned...
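
for example (a sketch of my own, with invented content, not something taken
from the draft): given ordinary structural markup like

  <h1>submitting an order</h1>
  <h2>step 1: billing address</h2>
  <h2>step 2: shipping method</h2>
  <table summary="cost and delivery time for each shipping method">
  <tr><th scope="col">method</th><th scope="col">cost</th><th scope="col">delivery</th></tr>
  <tr><td>ground</td><td>$4.95</td><td>5 to 7 days</td></tr>
  <tr><td>overnight</td><td>$19.95</td><td>next day</td></tr>
  </table>

a tool could derive an outline, a table of contents, or even a generated
diagram of the relationships between the pieces, with no author-supplied
illustration at all -- the headings and the scoped table headers ARE the
logical structure that 1.3 requires to be exposed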

so, i suppose that i would really only support the inclusion of a checkpoint
that stated:

Supplement content with multimedia. [Priority 3]

or

Use multi-modal content to reinforce concepts and data contained in single
modality formats/forms. [Priority 1]

which is really an ultra-abstraction of 1.1, so i tried again, and came up
with:

Utilize markup that enables multi-modal content to be associated with key
functionalities, structural concepts, and data which is contained in single
modality formats/forms. [Priority 1]
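
to show what i mean by markup that "enables" the association (again, a
sketch of my own using existing HTML 4.01 mechanisms, with invented file
names and figures):

  <!-- visual rendering with a programmatically associated text equivalent -->
  <img src="sales-chart.png"
       alt="sales rose 40 percent from 1999 to 2000"
       longdesc="sales-chart-description.html">

  <!-- multimedia object whose nested content is the fallback equivalent -->
  <object data="checkout-walkthrough.mov" type="video/quicktime">
  <p>the checkout process has three steps: billing, shipping, confirmation.</p>
  </object>

the association between modalities lives in the markup itself, so the
rendering agent, not the author over and over again, chooses which modality
to present...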

all of which leaves me i don't know where -- only quite far from what i hear
anne expressing and quite far from what is in the new draft (that every
single thing needs an author-defined multi-modal equivalent), although i
_think_ that the last attempt could be construed to cover it...

gregory.
