Do Accessibility Checkers have a place in QA? [via Automated WCAG Monitoring Community Group]

There are many great tools on the market that can check the accessibility of web
pages. The Web Accessibility Evaluation Tools List is a great resource to find
checkers for different types of content. Many of them focus on testing specific
aspects of accessibility, such as color contrast or parsing. But some have a
broader scope and will check many different aspects, reporting conformance
to WCAG success criteria.

I encourage web professionals to use an accessibility checker in their daily
work. But as an accessibility auditor with 8 years of experience, I must confess
that I don't use any of these checkers myself. To test HTML pages, the only
tools I use are a DOM inspector, a color analyzer and a validator. So why the
difference?
Test Accuracy
Automated accessibility testing is tricky. WCAG was never designed to be
automated. There is a good argument to be made that by definition, automated
testing of accessibility is impossible. Think about it. If you want to test if
some piece of content is accessible, you should compare the existing
implementation to what the component should be like when it is accessible. To
automate this, you need two things: a way to automatically determine what an
accessible version would look like, and a way to compare it to the current
situation.

The first part of this is important. Imagine a tool that could reliably
determine what the text alternative of an image should be. We could compare that
to the actual alternative and we would have our test, right? However, if there
was such a tool, assistive technologies could also implement it. And if they
did, we wouldn't have an accessibility problem with text alternatives anymore.

This idea seems to be true for most accessibility problems: If we can
automatically determine the solution, the problem goes away. Because of this,
accessibility checkers are mostly unable to determine whether a success
criterion was met, except where no assistive technologies are involved. What
our tools certainly can do is look for symptoms of accessibility barriers and
fail a success criterion based on those.
Symptoms Of Inaccessibility
If you've done anything with HTML in the past 10 years, you probably know that
you shouldn't use the font element. It is an outdated solution to styling text.
There is nothing inherently wrong with the font element, but many accessibility
checkers flag it as an error. Why do that for an element that is not
inherently inaccessible?

One way you could use the font element to create an accessibility problem is
the following:

Pick a color:
<font color="red">A</font>
<font color="green">B</font>

Here the font element is used to provide information that is not available in
text. This is a failure of success criterion 1.4.1 (Use of Color). A checker
that fails the criterion for use of the font element would be correct to do so
in this situation. It assumes the font element is often used to provide
information that is not otherwise available, and fails the criterion based on
that assumption.
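Such a symptom-based check is easy to sketch. The fragment below is a
hypothetical example using Python's html.parser; the class name and message
wording are mine, not taken from any real checker. It simply flags every font
start tag as a possible 1.4.1 failure:

```python
from html.parser import HTMLParser

class FontSymptomChecker(HTMLParser):
    """Flags every <font> start tag as a possible SC 1.4.1 failure."""

    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag == "font":
            line, _col = self.getpos()
            self.findings.append(
                f"line {line}: <font> element found; possible SC 1.4.1 "
                "failure (information conveyed by color alone)"
            )

checker = FontSymptomChecker()
checker.feed('Pick a color:\n'
             '<font color="red">A</font>\n'
             '<font color="green">B</font>')
print(checker.findings)
```

Note that the check never proves a failure; it only reports that the symptom
is present, and a human still has to decide whether a barrier actually exists.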

Assumptions are the basis of automated accessibility tests. Checkers look for
symptoms of accessibility barriers, such as the use of a font element, and assume
they found a barrier. Every automated test I know of works on assumptions in one
way or another. Even a test such as color contrast assumes there is no
conforming alternative version. The important question then becomes: how
accurate are these assumptions?
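Color contrast is a good illustration: the comparison itself is fully
mechanical, because WCAG 2.0 defines the relative-luminance and contrast-ratio
formulas exactly. A minimal sketch of that calculation:

```python
# The WCAG 2.0 contrast-ratio calculation. Even this mechanical test
# rests on an assumption: that no conforming alternative version exists.

def _linear(channel):
    """Convert an sRGB channel (0-255) to linear light, per WCAG 2.0."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    lighter = max(relative_luminance(rgb1), relative_luminance(rgb2))
    darker = min(relative_luminance(rgb1), relative_luminance(rgb2))
    return (lighter + 0.05) / (darker + 0.05)

# Black on white yields the maximum possible ratio of 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))
```

Success criterion 1.4.3 then requires a ratio of at least 4.5:1 for normal
text, so the tool compares the computed value against that threshold.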
Dealing With Assumptions
Most tests in tools are based on the test designer's experience with front end
development practices. This experience greatly influences the accuracy of an
accessibility checker. The required accuracy of the tests depends quite a lot on
who is using the tool. As an external accessibility auditor, I need a very
high degree of accuracy. Double-checking the results of a checker takes a lot
of time, often more than it would take to do the test manually. Therefore I
tend not to use these tools.

For web developers and QA teams, accuracy is less of an issue. It may be fine
to flag font elements as errors, since using them is not a good idea anyway.
Similarly, you could fail input elements without associated label elements, or
select elements with an onchange attribute. There are many tests checkers can
run that are meaningful to your organisation, even if they do not always
accurately identify accessibility errors.
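Team-specific checks like these can be sketched in the same symptom-based
style. The rule set below is illustrative only; the element/attribute pairs
and messages are my own, not taken from any real tool:

```python
from html.parser import HTMLParser

# Hypothetical per-team rules: each maps a tag name to a function that
# inspects the attributes and returns a warning message, or None.
RULES = {
    "font": lambda attrs: "avoid <font>; use CSS for styling",
    "select": lambda attrs: (
        "onchange on <select> may trigger an unexpected change of "
        "context (SC 3.2.2)"
        if "onchange" in dict(attrs) else None
    ),
}

class TeamChecker(HTMLParser):
    """Applies the RULES table to every start tag it encounters."""

    def __init__(self):
        super().__init__()
        self.warnings = []

    def handle_starttag(self, tag, attrs):
        rule = RULES.get(tag)
        if rule:
            message = rule(attrs)
            if message:
                line, _col = self.getpos()
                self.warnings.append(f"line {line}: {message}")

checker = TeamChecker()
checker.feed('<select onchange="submit()"><option>NL</option></select>')
for warning in checker.warnings:
    print(warning)
```

Each rule encodes an assumption about how the team writes HTML; the value of
the checker depends on how well those assumptions match reality.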
Conclusion
Accessibility checker tools are great! They provide a quick and relatively
inexpensive way to find accessibility barriers on your website. They are useful
during development to encourage a style of coding that avoids accessibility
barriers. They also provide a good starting point for anyone who wants to build
accessibility into their quality assurance process, though on their own they
don't give you the whole picture.

Accessibility checkers have limitations. Being aware of those means you can make
better decisions about the tools you use. The field of web accessibility has
long been focused on manual audits, but there is a clear precedent for the use
of tools. As long as we understand their limitations, we can work within them
and become better and more efficient as a result.
About The Author
Wilco Fiers is a web accessibility consultant and auditor at Accessibility
Foundation NL. He is founder and chair of the Auto-WCAG community group. Wilco
has participated in a variety of accessibility projects such as WAI-AGE, WAI-ACT
and EIII as well as being a developer in open source projects such as QuailJS
and WCAG-EM Report Tool.



----------

This post sent on Automated WCAG Monitoring Community Group



'Do Accessibility Checkers have a place in QA?'

https://www.w3.org/community/auto-wcag/2015/04/23/accessibility-checkers-in-qa/



Learn more about the Automated WCAG Monitoring Community Group: 

https://www.w3.org/community/auto-wcag

Received on Thursday, 23 April 2015 09:57:03 UTC