RE: Exploding the myth of automated accessibility checking

I think human judgment will inevitably be necessary for some criteria.
I also think that the more that can be machine tested, the better.  This
is because, like it or not, many more accessibility reviews are likely
to be done by automated means than by an approach that requires staff
training, skill, and time for case-by-case inspections.

Jamal



-----Original Message-----
From: David Poehlman [mailto:david.poehlman@handsontechnologeyes.com] 
Sent: Monday, August 08, 2005 9:16 PM
To: Jamal Mazrui
Cc: Wendy Chisholm; Joe Clark; WAI-IG; WAI-GL
Subject: Re: Exploding the myth of automated accessibility checking


There is a mindset that would limit the scope of WCAG 2.0 to only
those factors which can be tested by automation.  I would not welcome
this change because you could then, even more than now, have a site
that passed but was utterly useless.

-- 
Jonnie Apple Seed
With His:
Hands-On Technolog(eye)s



On Aug 8, 2005, at 4:28 PM, Jamal Mazrui wrote:


I think the goal of automated testing, as much as possible, is an
important one.  To me, these results indicate that the success criteria
and testing tools need to be improved, rather than the goal discounted.

Regards,
Jamal



-----Original Message-----
From: w3c-wai-ig-request@w3.org [mailto:w3c-wai-ig-request@w3.org] On
Behalf Of Wendy Chisholm
Sent: Monday, August 08, 2005 3:35 PM
To: Joe Clark; WAI-IG; WAI-GL
Subject: Re: Exploding the myth of automated accessibility checking



At 03:13 PM 8/8/2005, Joe Clark wrote:


> National treasure Gez Lemon wrote a test page with known validation
> and WCAG errors and ran it through various automated checking tools,
> none of which caught more than a few of the errors, if that.
>
> <http://juicystudio.com/article/invalid-content-accessibility-validators.php>

Excellent.  This is an important point for people to understand.  I
evaluated a Web site last week that had 8 major accessibility issues,
but the evaluation tools only found 1 or 2 (depending on the tool).


> It's quite a devastating analysis and calls into question the WCAG
> Working Group's interest in making as many guidelines as possible
> machine-checkable.

When the WCAG WG talks about testability, our primary goal is to
provide enough information so that people who evaluate or create Web
content can make a good decision.  In WCAG 1.0, some of the checkpoints
are ambiguous, so we're trying to fix that in WCAG 2.0 by providing as
much testable information as possible.  By specifying that success
criteria must be "testable," we are not saying that success criteria
are machine automatable.  We are saying that a person should be able to
determine if they have satisfied a given criterion.  Most of the
"tests" (in the test suite) are procedures for humans to follow, not
algorithms for tools.  If tests are automatable, that's great; however,
I don't think anyone expects that all tests (or even a majority) will
be fully automated.  Shawn Henry wrote a great piece about this a while
ago, "Web Accessibility Evaluation Tools Need People" [1].
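To make the distinction concrete, here is a minimal sketch (purely
illustrative, not part of the WCAG test suite) of a fully automatable
check: flagging img elements that have no alt attribute at all.  This
is about the limit of what a tool can decide on its own; whether an
alt text that *is* present actually describes the image still takes a
person.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flags <img> elements that lack an alt attribute.

    This is the machine-checkable part; judging whether existing
    alt text is meaningful still requires human review.
    """
    def __init__(self):
        super().__init__()
        self.missing = []  # (line, column) of each <img> without alt

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(self.getpos())

checker = MissingAltChecker()
checker.feed('<p><img src="a.png"><img src="b.png" alt="logo"></p>')
print(checker.missing)  # the first img is flagged; the second passes
```

A tool built on this would report one error here, yet it would happily
pass alt="image123" on the second element -- exactly the kind of result
the Juicy Studio tests illustrate.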

The 30 June 2005 Working Draft of WCAG 2.0 says, "The Working Group
believes that all success criteria should be testable. Tests can be
done by computer programs or by people who understand this document.
When multiple people who understand WCAG 2.0 test the same content
using the same success criteria, the same results should be obtained."
This could probably use some work, but I hope that it's clear that we
understand humans are part of the evaluation process and that our
primary goal is to provide unambiguous success criteria.

Best,
--wendy

[1] <http://uiaccess.com/evaltools.html>

Received on Tuesday, 9 August 2005 12:55:47 UTC