comments on draft WAET

Here are my comments on WAET, based on our experience with Just Accessible, the tool of the Dutch Government. In general, the document outlines the different aspects well.
My comments are written below.

With kind regards,

Hanno Lans

2.1 Retrieving and rendering web content
 
- COMMENT 1 - 2.1.9 
An important additional feature for a web crawler is handling endless loops. With the Just Accessible tool we decided to crawl every page, but some types of pages exist infinitely, for example calendars with a page for every future day. Besides the performance cost, this creates problems when calculating the number of failures. A crawler could have logic for this; one possible approach is sketched below.
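
To illustrate, a minimal Python sketch of one possible heuristic: normalise each URL to a coarse pattern and stop following links once a pattern has produced too many pages. The pattern rule and the cap are made-up examples, not the actual logic of Just Accessible.

    import re
    from collections import Counter
    from urllib.parse import urlparse

    MAX_PAGES_PER_PATTERN = 200  # hypothetical cap per URL pattern

    def url_pattern(url: str) -> str:
        """Collapse numeric path segments (ids, dates) into a placeholder."""
        return re.sub(r"\d+", "{n}", urlparse(url).path)

    seen_patterns: Counter = Counter()

    def should_crawl(url: str) -> bool:
        """Refuse a URL once its pattern looks like a crawler trap."""
        pattern = url_pattern(url)
        seen_patterns[pattern] += 1
        return seen_patterns[pattern] <= MAX_PAGES_PER_PATTERN

With this, /calendar/2014/08/15 and /calendar/2037/01/01 share the pattern /calendar/{n}/{n}/{n}, so an infinite calendar is cut off after the cap.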

- COMMENT 2 - 2.1.9

I miss the feature of sampling. In Just Accessible we created a separate sampling mechanism, as it takes too much processing capacity to test millions of pages. But we first have to crawl them all, because taking just the first 400 pages did not give us a useful selection. Several sampling parameters are possible: random, hierarchy, user statistics, content date, content type. A sketch of such a sampling stage follows below.
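
As an illustration, a minimal Python sketch of a sampling stage that runs after the full crawl: it stratifies the crawled pages by one parameter (content type) and picks randomly within each stratum. The other parameters could be added as extra strata or weights; the field names and quota are hypothetical.

    import random
    from collections import defaultdict

    def sample_pages(pages, per_type=100):
        """pages: iterable of dicts like {'url': ..., 'content_type': ...}."""
        by_type = defaultdict(list)
        for page in pages:
            by_type[page["content_type"]].append(page)
        sample = []
        for pages_of_type in by_type.values():
            # Random choice within the stratum, capped by the quota.
            sample.extend(random.sample(pages_of_type, min(per_type, len(pages_of_type))))
        return sample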

- COMMENT 3 - 2.1.9

Most crawlers use only static sources (HTML and PDF) to extract URLs. It would be more correct to crawl the rendered DOM instead, but that comes with a huge performance loss. Still, whether or not the DOM is used for crawling is an interesting feature to list.
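
For reference, a minimal Python sketch of the static approach: parse the raw HTML source for href attributes. Any link that only appears in the rendered DOM (inserted by JavaScript) is missed; finding those would require rendering the page in a (headless) browser instead.

    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        """Collects href values from anchor tags in the raw HTML source."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def extract_links(raw_html: str):
        parser = LinkExtractor()
        parser.feed(raw_html)
        return parser.links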

- COMMENT 4 - 2.1.9

In Just Accessible we split the crawling mechanism and the testing mechanism. So in fact, a page is first crawled for its URLs, and after the sampling mechanism it is tested on WCAG2 in a headless browser. I think it could be mentioned more explicitly that the crawler produces the list of URLs, and that testing is done in another stage; a sketch of this split follows below.
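
A minimal Python sketch of that two-stage split, with hypothetical placeholder functions (not the actual code of Just Accessible):

    from typing import List

    def crawl_urls(start_url: str) -> List[str]:
        """Stage 1: follow links and return discovered URLs, no testing."""
        return []  # placeholder for the crawl loop

    def sample_urls(urls: List[str]) -> List[str]:
        """In between: reduce millions of URLs to a testable sample."""
        return urls[:400]  # placeholder for a real sampling strategy

    def test_page(url: str) -> dict:
        """Stage 2: render the URL in a headless browser and run WCAG2 checks."""
        return {"url": url, "errors": []}  # placeholder for the real tests

    def run_evaluation(start_url: str) -> List[dict]:
        return [test_page(url) for url in sample_urls(crawl_urls(start_url))]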

- COMMENT 5 - 2.1

As noted in the previous comment, in Just Accessible a page is first crawled for its URLs and, after sampling, tested on WCAG2 in an emulated web browser (headless browser). At the moment I don't see the document mention what kind of test environment a tool uses. There are possibly four test environments:
1. download the raw resources and analyse those files;
2. render the URL in a headless browser and analyse the DOM;
3. render the URL in a real browser and analyse the DOM;
4. render the URL in a set of browsers (for example a mobile emulator) and analyse the DOMs.
In Just Accessible we chose to render in a headless browser and are using PhantomJS with WebKit, but this has drawbacks, as the HTML5 media API does not work.


2.2 Testing functionality

- COMMENT 6 -  2.2.2 
"Some tools do not declare that they only perform automatic testing. Since it is a known fact that automatic tests only cover a small set of accessibility issues, full accessibility conformance can only be ensured by supporting developers and accessibility experts while testing in manual and semiautomatic mode."
On the other hand, tools exist that do no fully automatic testing, but only give feedback for manual evaluation (like inline suggestions). These tools flag a lot of possible errors, but are not good at detecting actual failures. It would be great if a user knew how many tests are done fully automatically, and how many tests are merely 'selectors' for detecting possible errors.

- COMMENT 7 -
As WCAG2 consists of failures and techniques, a distinction between the two might be essential to compare tools. With Just Accessible we started by focusing on the detection of WCAG2 techniques ('make sure that at least one of the following techniques is used...') and not on the WCAG2 failures. I am not sure whether, in the end, this approach is better than focusing on the failures.

- COMMENT 8 - 2.3.5
Result aggregation is also influenced by the target audience of the results. For policy makers you would like to have an aggregated score; for developers you might need an aggregated list of structural (repeated) errors; and for editors you might need an aggregated list of unique errors. A sketch of these three views on the same error list follows below.
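
As an illustration, a minimal Python sketch that aggregates one raw error list in three ways; the error records and the score formula are made-up examples:

    from collections import Counter

    errors = [
        {"page": "/home", "rule": "img-alt", "selector": "#logo"},
        {"page": "/news", "rule": "img-alt", "selector": "#logo"},  # repeated template error
        {"page": "/news", "rule": "label", "selector": "#search"},
    ]
    pages_tested = 2

    # Policy makers: one aggregated score, here simply errors per tested page.
    score = len(errors) / pages_tested

    # Developers: structural errors, i.e. the same rule+selector repeated across pages.
    structural = Counter((e["rule"], e["selector"]) for e in errors)
    repeated = {key: n for key, n in structural.items() if n > 1}

    # Editors: each distinct error listed once per page where it occurs.
    unique_per_page = sorted({(e["page"], e["rule"], e["selector"]) for e in errors})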

- COMMENT 9 - 2.3.5
Result aggregation is also possible over time, to track progress. In Just Accessible we do quarterly audits and compare the results. If errors have unique identifiers/pointers, a tool could even provide a list of new, open and resolved errors, as in the sketch below. This makes it possible to have progress scores.
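
A minimal Python sketch of such a comparison, assuming every error has a stable unique identifier (for example a hash of rule + page + DOM pointer); the identifiers here are made up:

    previous_audit = {"e1", "e2", "e3"}   # error ids from the last quarter
    current_audit = {"e2", "e3", "e4"}    # error ids from this quarter

    new_errors = current_audit - previous_audit       # {'e4'}
    open_errors = current_audit & previous_audit      # {'e2', 'e3'}
    resolved_errors = previous_audit - current_audit  # {'e1'}

    # One possible progress score: share of last quarter's errors now resolved.
    progress = len(resolved_errors) / len(previous_audit)  # 1/3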

3 Examples of evaluation tools

- COMMENT 10 - 

We could add a WYSIWYG plugin as an example:
3.3 Example Tool D: Accessibility tool for rich text editors

Tool D is an accessibility evaluation tool for rich text editors. The tool supports content authors when writing content for web pages.

The tool lives in the rich text editor, activated with a button.
The tool takes into account the processed HTML output of the rich text editor (page language etc.).
The tool might give suggestions as you type and additional hints.

- COMMENT 11 -
3.4 Side-by-Side Comparison of the Example Tools

"Dynamic content":'yes' could be more descriptive. With dynamic content we could have search pages, single page forms, multipage forms, filters, video content, all kind of buttons with open and close, fullscreen, interactive tools, a video playing with links etc. I cant imagine a situation a tool can do all, so this category should be more specific. For example: testing of single page forms, analysing search results, speech analysis etc

- COMMENT 12 -
"Test modes: automatic, semiautomatic and manual" : The quantity for each is relevant: how many techniques or failures are fully automatic tested, how many tests have a semi-automatic mode and how many tests are covered in the manual mode? It's very hard to compare tools at the moment as some claim that a technique or failure is tested, but in fact only a few aspects are implemented, or there is only a selector added.
When we had to make the decision which tool to use, we made Excel sheets with all the accessibility tools on the market and their claims on which WCAG2 techniques are covered. Comparison was hard as tools aren't explicit if a technique or failure is fully implemented.
