Re: Checkpoint on testability

At 02:16 PM 12/21/2000, Leonard R. Kasday wrote:
>Per my action item from last call here's a first cut at a new guideline:

After going back and forth with William on this, I'm going to
look over this proposal once more and offer constructive advice
on how this could be adapted to a model that I'd find more
palatable.

>Guideline X.  Design for so that testability can most easily be verified.
>Pages should be designed to minimize amount of human effort needed to confirm accessibility.

First, can I confirm that the first sentence is the actual
guideline proposed, and the second is follow-up explanatory
text, not the actual body of the guideline?  I ask this
question because I think the first is straightforward and
clear, while the second is subjective and vague, and adds
nothing to the first.

Taken together with the first, the second sentence weakens the
guideline; if the second is omitted, the first is strong enough
to stand by itself.

Of course there seems to be a grammatical problem in the
phrasing; I assume the 'for' is a mistake that would be
removed in editing.

I would like to see "testability" replaced, and possibly the
word "most" as well.  In place of "testability" (which I am
not quite sure is a real word), I suggest something like the
following, which uses already existing (but currently
undefined! :) ) terms:

      Explicitly designate the accessibility compliance
      level of content, and design so that the compliance
      level may be readily verified.

How does that sound?  If we're going to bother to tell people
to be "testable", we might as well go all the way and require
that if you are going to be WCAG 2.0 compliant, you need to
say so explicitly.  This is the same, in my opinion, as
requiring a DOCTYPE at the start of a valid (X)HTML file.
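
As a sketch only, such a designation might look something like
this in HTML (the meta name and value here are purely
hypothetical; no such convention has been defined yet):

      <!-- Hypothetical conformance claim; the name and
           content values are illustrative only. -->
      <meta name="wcag-conformance" content="Double-A">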

However, I'm also concerned that this principle, while good,
doesn't rise to the level of a separate guideline.  I think it
sounds more like a checkpoint, and the proposed checkpoints
sound more like techniques.

I'm not sure which guideline this falls under; clearly such a
checkpoint is strongly aligned with other checkpoints which speak
of metadata.

Where is our metadata guideline currently?

>checkpoint x.1
>Specify in machine-readable form specifications against which machine verification may be performed.
>Example: in HTML include the DOCTYPE.

This is a good technique; it would be listed in nearly all (all?)
techniques documents for this checkpoint.  (I don't believe that
CSS has a way to specify the exact version of CSS in use.)
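
For reference, the HTML form of this technique is the familiar
declaration; e.g., for HTML 4.01 Strict:

      <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
          "http://www.w3.org/TR/html4/strict.dtd">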

>checkpoint x.2
>Avoid use of alternative versions of content that require human effort to verify.
>Example: Avoid when possible manually created images of text; use styled text instead.  Note that automated generation of images of text is allowed per checkpoint x.3

This worries me because of the vagueness; "avoid"..."when possible"
is very much up to interpretation and gives little in the way of
guidance, instead asking the web designer to make her own choice.

I would rather see this as a technique, used when the appropriate
technologies are applied.  In this case, automatic generation of
images is rather difficult for most web designers, but the phrasing
of this proposal could ban all forms of manual generation if
adopted as a checkpoint.
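
To illustrate the styled-text alternative the quoted example
recommends (the filename and text here are hypothetical):

      <!-- An image of text requires a human to confirm
           that the ALT text matches the image: -->
      <img src="sale-banner.gif" alt="Summer Sale!">

      <!-- Styled text conveys a similar look, and the
           text itself can be checked by machine: -->
      <h2 style="color: #c00">Summer Sale!</h2>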

I'm also not convinced that automatically generated textual images
_are_ any more or less accessible than those that are not.  In
any case, "automatic server-side generation" is just one of the
technologies on our list, and not something we can assume will be
present on every site.

>checkpoint x.3
>When alternative versions of content are created, create them automatically when possible.
>Example: A program that automatically converts text to images

This again is a specific technique, not a guideline- or
checkpoint-level principle.  We need to avoid mandating _how_
to solve problems in checkpoints.

>checkpoint x.4
>When alternative content is created manually, make specific correspondence between content and its particular alternative.

Do you mean "alternative content" as separate from "alternative
presentations of the same content"?  Or the same?  I am confused by
the idea of creating content here.

>Negative Example:  A manually created alternative text-only site in which information is distributed differently among the pages.  Validating the equivalence of such a manually created site is very labor intensive.

This disturbs me because the labor involved in validating a web
site should not be a justification for accessibility guidelines --
rather, the effect on accessibility is what matters.  William has
argued that "validating the equivalence" and using a site are the
same function, but I still don't agree; I think this proposed
checkpoint (which should be a technique) is not talking about
simply using a site, but about something different.

At one point we had talked about making relationships explicit in
the markup or data model; is that what is meant here?  This may be
a technique for another requirement. :)
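
If that is the intent, a rough sketch of such explicit linkage
in HTML might be (the filename is hypothetical):

      <link rel="alternate" href="text-only.html"
            title="Text-only version of this page">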

>Positive Example: Image and ALT text.  It's simple for a tool to present image and ALT tag to user for comparison (e.g. in A-Prompt and in the Wave)
>Positive Example: Content provided by Object tag and nested object tag.
>Positive Example: Two sites created from common data through different transformations, PROVIDED that the transformation rules are publicly visible for validation.
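
The first two positive examples are easy to picture; in HTML
they look roughly like this (filenames, types, and text are
hypothetical):

      <img src="chart.gif" alt="Sales rose 12% in Q3">

      <object data="chart.svg" type="image/svg+xml">
        <object data="chart.gif" type="image/gif">
          Sales rose 12% in Q3
        </object>
      </object>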

I don't believe that the transformation rules need to be publicly
revealed.  I'm not sure why this is necessary.  (Full disclosure:
Reef/Edapta have considerable effort invested in our transformation
process; while I am not certain that the rules are or will remain
strictly proprietary, I likewise cannot guarantee that all details
of the transformation rules will eventually be made public.)

>checkpoint x.5
>When possible, use only styles linked consistently to particular semantic objects.
>Example: CSS rule that makes all Headings a particular style.
>Negative Example: CSS rule linked to class.  Current CSS has no way to expose class semantics to user agent, so it takes human judgment to decide if the class is simply decorative, which is harmless, or is carrying information unavailable to user agents.

I don't understand the "when possible" qualifier, because to the
best of my knowledge, the parties who are in favor of this restriction
seem to believe that it is _always_ possible to do this.

I also believe that it will be impossible to eliminate human
judgment from the accessibility evaluation process, so I am not
particularly swayed by arguments that using class requires human
intervention.  Many guidelines and checkpoints fall into this
category, and I don't see that as a reason to ban outright entire
groups of techniques, such as CSS rules linked to classes.  That
is throwing the baby out with the bathwater simply for the sake
of automated evaluation scripts, and I frankly do not have as
much faith in automatic tools as some people propose we should.
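
For clarity, here is the contrast the quoted negative example
draws, sketched in CSS (the class name is hypothetical):

      /* Tied to a semantic element; a user agent knows
         what an H1 is, so the rule is machine-checkable: */
      h1 { font-size: 150%; }

      /* Tied to a class; CSS gives a user agent no way to
         learn what "warning" means, so a human must judge
         whether the class carries information: */
      .warning { color: red; font-weight: bold; }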


Summary:

I think the proposed guideline can be rewritten as a checkpoint,
and most of the proposed checkpoints are clearly technology-specific
techniques, not true checkpoints.  A few specific points, usually
made in "examples" or "negative examples", are problematic; I assume
that the original proposal was presented as a draft that can be
modified and consensus/compromise achieved, not as a done deal
that needs to be accepted or rejected as a whole.  There are some
good concepts in there, and I believe we need to incorporate them
into WCAG 2.0.

-- 
Kynn Bartlett  <kynn@idyllmtn.com>                    http://kynn.com/
Sr. Engineering Project Leader, Reef-Edapta       http://www.reef.com/
Chief Technologist, Idyll Mountain Internet   http://www.idyllmtn.com/
Contributor, Special Edition Using XHTML     http://kynn.com/+seuxhtml
Unofficial Section 508 Checklist           http://kynn.com/+section508
