Re: [WebAIM] Best automated Accessibility evaluation tool

On Fri, 18 Nov 2005 14:53:18 +0100, Jim Tobias <tobias@inclusive.com>  
wrote:

> Is there a "best tool overall", or rather a (small) set of tools, each of
> which is best at one part of automated testing?  I'm assuming the latter.

I think there are different tools that are good at different testing
tasks. Amongst them, I like Hera's guidance for manual testing, its EARL
reporting, its simple support for collaborative testing, its translation
interface that makes it easy to adapt to your language, its reporting
style, and its philosophy - although as one of the people involved in its
development, that is hardly surprising.
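
For anyone who hasn't seen EARL (the Evaluation and Report Language),
here is a rough idea of what a tool's assertion looks like - a minimal
sketch in Python, assuming the draft earl: vocabulary. The exact terms
and namespace Hera emits may differ, and the subject and test URIs below
are made up for illustration:

# Emits one illustrative EARL assertion: "this page failed this test".
# The earl: namespace and the example URIs are assumptions, not
# necessarily what Hera actually produces.

EARL_TEMPLATE = """<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:earl="http://www.w3.org/ns/earl#">
  <earl:Assertion>
    <earl:subject rdf:resource="{subject}"/>  <!-- the page tested -->
    <earl:test rdf:resource="{test}"/>        <!-- the test applied -->
    <earl:result>
      <earl:TestResult>
        <earl:outcome rdf:resource="http://www.w3.org/ns/earl#{outcome}"/>
      </earl:TestResult>
    </earl:result>
  </earl:Assertion>
</rdf:RDF>"""

print(EARL_TEMPLATE.format(
    subject="http://example.org/page.html",  # hypothetical page under test
    test="http://www.w3.org/TR/WAI-WEBCONTENT/#tech-text-equivalent",
    outcome="failed",  # other outcomes include passed and cantTell
))

The point of a machine-readable format like this is that results from
different tools and different testers can be merged, which is presumably
part of what makes the collaborative testing practical.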

I like AccVerify for its raw power. When dealing with a large-scale site
professionally, I wouldn't want to be without the full paid version.

I like the Wave for its quick overview. As an 'expert' trying to get a
handle on what a site is like, the Wave is often the first tool I reach
for (Hera or AccVerify comes next).

> Their other criteria are reliability and ease of use.  Their goal is to
> split web accessibility into two parts: automated testing performed by
> relatively untrained testing technicians, and complex testing performed  
> by highly trained usability/accessibility engineers.  I can't fault  
> their goal; can you?

Nope. The idea of making a McDonalds-style industry out of basic testing,
but using real expertise for the stuff that is difficult, makes good
economic sense.

> Given WAI's defensible "no recommendations" policy, there seems to be no
> coordinated public source for good general guidance on automated tools.

Well, any recommendations from WAI would in fact be indefensible. However,
if they had the resources available (they are looking for a technical
person just to maintain working groups, and I encourage anyone who wants
to work there to apply, since I am suffering from working groups not
having any support), they could do more testing and publish more results.

> At the least, there *should* be an agreed-upon list of which guidelines  
> can be robo-tested, and which tools perform satisfactorily.  I can't  
> find such a
> resource; am I missing something?

I worked on this in EuroAccessibility - unfortunately it seemed that many
of the other members were more interested in securing funding for
themselves than in doing any real work, so the Task Force was closed.

But have a look at
http://www.euroaccessibility.org/tf2_doc/method/evalmeth.html - the draft
we got to, with references to Giorgio's work, which was an important basis.

It turns out to be very difficult to get agreement on which tests can be
reliably performed automatically: there are disagreements on the finer
points of testing, not everyone is prepared to release their testing
algorithms (Hera and aChecker are open source, so you can look at the
tests they apply), and making claims about what technology people can or
cannot develop is, in my experience, a fairly difficult thing to do.
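
To make that concrete, here is a minimal sketch in Python (a toy
example of mine, not any particular tool's algorithm) of the split
everyone more or less agrees on: a machine can reliably detect that an
img element has no alt attribute at all, but only a human can judge
whether the alt text that is present is any good.

# Toy checker: separates the machine-decidable case (alt missing) from
# the case that has to be queued for human review (alt present, quality
# unknown). Illustrative only.
from html.parser import HTMLParser

class ImgAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []      # definite, automatable failures
        self.needs_human = []  # alt exists; adequacy needs a person

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "(no src)")
        if "alt" not in attrs:
            self.missing.append(src)
        else:
            self.needs_human.append(src)

checker = ImgAltChecker()
checker.feed('<p><img src="logo.png">'
             '<img src="chart.png" alt="chart"></p>')
print("Missing alt (automatic fail):", checker.missing)
print("Alt present (manual review): ", checker.needs_human)

The disagreements start as soon as a tool tries to go further than this
- for example, flagging alt="chart" as probably inadequate - because
then you are encoding a judgement call in an algorithm.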

Giorgio Brajnik (you should learn his name if you don't know it - he is  
involved in LIFT, a repair tool that in my opinion shows up all the others  
for usability, although it isn't my first choice for its testing  
algorithms) has done some interesting work on methodologies for comparing  
tool results. If there is interest in putting some resources (effort) into
benchmarking tools, I will try to resurrect the EuroAccessibility Task
Force in the new year, or set up something similar. It is certainly true that
this information would be helpful. It seems that the people running off  
with the funding grants are generally not doing anything much about  
providing it :-(

cheers

Chaals

-- 
Charles McCathieNevile                      Fundacion Sidar
charles@sidar.org   +61 409 134 136    http://www.sidar.org

Received on Saturday, 19 November 2005 15:45:58 UTC