Re: Comments WAET

Here are my comments. I also like your comments, Annika and Wilco, about aggregation (unique errors versus repeated errors) and combining results. As I am unexpectedly leaving on a short holiday, I won't be able to join the meeting later today.

Hanno

- COMMENT 1 -
2.1.9 
Additional issues:
Endless/infinite loops. For example: endless calendars with an infinite number of pages, one for every day in the future.
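A sketch of one possible guard (the names and the limit are my own illustration, not from the document): a crawler could collapse numeric URL segments into a "template" and cap how many pages it fetches per template, so date-based calendar pages cannot trap it in an endless series.

const MAX_PAGES_PER_TEMPLATE = 25; // assumed, tool-specific limit
const seenPerTemplate = new Map<string, number>();

// Collapse digits so /calendar/2014/08/07 and /calendar/2014/08/08
// both map to the same template /calendar/N/N/N.
function urlTemplate(url: string): string {
  return new URL(url).pathname.replace(/\d+/g, "N");
}

function shouldCrawl(url: string): boolean {
  const key = urlTemplate(url);
  const count = seenPerTemplate.get(key) ?? 0;
  if (count >= MAX_PAGES_PER_TEMPLATE) return false; // likely an endless series
  seenPerTemplate.set(key, count + 1);
  return true;
}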

- COMMENT 2 -

"Capabilities related to features described in previous sections like: content negotiation, authentication support or session tracking."
Features for 'dynamic content' should be added to this list, as this is an important one for some success criteria.

- COMMENT 3 - 
2.2.2 Test modes: automatic, semiautomatic and manual
"Some tools do not declare that they only perform automatic testing. Since it is a known fact that automatic tests only cover a small set of accessibility issues, full accessibility conformance can only be ensured by supporting developers and accessibility experts while testing in manual and semiautomatic mode."
On the other hand, tools that don't declare that they are not doing fully automatic testing, but only give feedback for manual evaluation, are not very helpful in some cases. These tools report a lot of possible errors, but can't detect failures. It would be great if a tool declared which and/or how many tests are done automatically, and how many need semi-automatic or manual evaluation.

- COMMENT 4 -
Also, note that it is easier to automatically detect the 'WCAG2 Failures', but harder to automatically evaluate the 'WCAG2 Sufficient Techniques'. For example, it is easy to detect placeholder ALT text (F30), but harder to detect descriptive alternative text (G94).
So: detecting failures of success criteria can be done automatically (returning earl:failed); detecting semantically correct and descriptive use of techniques needs manual testing (if done, returning earl:passed; if not manually tested, earl:cantTell).
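To illustrate (a minimal sketch, not any existing tool; the placeholder word list is an assumption): alt text that matches a known placeholder word or the image filename can be failed automatically (F30), while everything else has to stay earl:cantTell until a human judges whether it is descriptive (G94).

type Outcome = "earl:failed" | "earl:cantTell";

// Assumed placeholder words; a real tool needs a larger,
// localised list (see COMMENT 5).
const PLACEHOLDERS = ["image", "picture", "photo", "graphic", "spacer"];

function checkImgAlt(alt: string, src: string): Outcome {
  const normalised = alt.trim().toLowerCase();
  const filename = src.split("/").pop()?.toLowerCase();
  if (PLACEHOLDERS.includes(normalised) || normalised === filename) {
    return "earl:failed"; // F30: placeholder or filename used as alt text
  }
  return "earl:cantTell"; // G94: descriptiveness needs a human judgement
}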

- COMMENT 5 -
2.4.2 Localization and internationalization
Besides the UI, some tests also need internationalisation. For example, placeholder text should be tested against word lists in the language of the page.
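Continuing the sketch from COMMENT 4 (the word lists and the fallback are illustrative assumptions): the placeholder list could be chosen from the page's declared language instead of being hard-coded in English.

const PLACEHOLDERS_BY_LANG: Record<string, string[]> = {
  en: ["image", "picture", "photo"],
  nl: ["afbeelding", "foto", "plaatje"],
  de: ["bild", "foto", "grafik"],
};

// Pick the word list from the html lang attribute, e.g. "nl-NL" -> "nl".
function placeholdersFor(pageLang: string): string[] {
  const primary = pageLang.toLowerCase().split("-")[0];
  return PLACEHOLDERS_BY_LANG[primary] ?? PLACEHOLDERS_BY_LANG["en"];
}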

- COMMENT 6 - 
3 Examples of evaluation tools

We can add a WYSIWYG plugin:
3.3 Example Tool D: Accessibility tool for rich text editors

Tool D is an accessibility evaluation tool for rich text editors. The tool supports content authors when writing content for web pages.

The tool lives in the rich text editor, activated with a button.
The tool takes into account the processed HTML output of the rich text editor (page language, etc.).
The tool might give suggestions as you type and additional hints.
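A rough sketch of how such a plugin could hook in (the Editor interface below is entirely hypothetical, not any real editor's API):

interface Editor {
  getProcessedHtml(): string;          // output HTML, incl. page language
  onChange(handler: () => void): void; // fires while the author types
  showHint(message: string): void;
}

function installAccessibilityChecker(editor: Editor): void {
  editor.onChange(() => {
    const doc = new DOMParser().parseFromString(
      editor.getProcessedHtml(), "text/html");
    for (const img of doc.querySelectorAll("img:not([alt])")) {
      editor.showHint("Missing alt text on " + (img.getAttribute("src") ?? "an image"));
    }
  });
}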

- COMMENT 7 -
3.4 Side-by-Side Comparison of the Example Tools

Tool B "Dynamic content": 'yes' is a bit optimistic. With dynamic content we have: search pages, single-page forms, multi-page forms, filters, video content, all kinds of buttons that open and close, fullscreen, interactive tools, etc. I can't imagine a situation where a tool can do all of this, so this category should be more specific. For example: testing of single-page forms, analysing search results, speech analysis, etc.

- COMMENT 8 -
"Test modes: automatic, semiautomatic and manual": this needs specification: how many of the WCAG Failures are tested automatically, how many Techniques are tested automatically, and how many tests need the semi-automatic mode.

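For example (purely illustrative; the shape and the numbers are invented), a tool could publish a machine-readable declaration of its coverage per test mode:

interface CoverageDeclaration {
  wcagFailures: { automatic: number; semiAutomatic: number; manual: number };
  sufficientTechniques: { automatic: number; semiAutomatic: number; manual: number };
}

// Invented numbers, for illustration only.
const exampleDeclaration: CoverageDeclaration = {
  wcagFailures: { automatic: 28, semiAutomatic: 10, manual: 5 },
  sufficientTechniques: { automatic: 6, semiAutomatic: 40, manual: 120 },
};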

On 7 Aug 2014, at 11:41, Annika Nietzio <an@ftb-volmarstein.de> wrote:

> Hi Wilco, hi all,
> 
> here are my thoughts on the WAET - to be discussed in the meeting this
> afternoon.
> 
> Kind regards
> Annika
> 
> 
> == Review of http://www.w3.org/TR/2014/WD-WAET-20140724/ ==
> 
> Abstract:
> "Features to specify and manage (...) web accessibility evaluations".
> The aspect of managing web accessibility evaluation is not taken up in the features. "2.4.1 Workflow integration" focuses mainly on developers. But the person responsible for managing the process (e.g. of creating a new web site) is usually not the developer.
> 
> 2.1.1 Content types
> "From the accessibility standpoint, the evaluation of these resources is relevant for issues like colour contrast, colour blindness or media alternatives, for instance." Resources can't be colour blind. Suggestion: "colour differentiation" or "distinguishability"?
> 
> 2.2 Testing functionality
> Suggested feature: Support users in manual testing by emulating assistive technologies (such as screen readers).
> 
> 2.3 Reporting and monitoring
> For users wanting to import/export/compare testing results, the major challenge is to align the test results from different sources. This is related to "2.3.3 Import/export functionality" and "2.3.5 Results aggregation" but could also be added as a new feature.
> Suggested new feature: The results use a common way of identifying the accessibility problems that are reported. This could be WCAG 2.0 Techniques or Success Criteria.
> 
> 
> 
> 
> On 06.08.2014 at 16:14, Wilco Fiers wrote:
>> Dear ERT,
>> 
>> I just wanted to say I think you all did a great job on the WAET. I've written up a few thoughts I had while reviewing the public draft. I've asked the members of the auto-wcag community group to see if they can review the document as well. Hope this is of some help for you all! Looking forward to seeing how the document will develop further.
>> 
>> Regards,
>> 
>> Wilco Fiers
>> Accessibility Foundation
>> 
>> 
>> - COMMENT 1 -
>> 2.1.1. Content types: This confused me a bit, because of the word 'content'. In WCAG the word 'content' means something different than it does in HTTP. I think for WCAG what is called 'content' here is actually 'technologies'. Maybe something like "Processed technologies" is clearer, as the main question here seems to be: does the tool look at just the HTML, or does it take CSS, Javascript, etc. into account?
>> 
>> - COMMENT 2 -
>> A feature I miss that relates to automated tools is reliability benchmarking. There are big differences between the reliability of different automated tools. Knowing how many tests a tool has and how reliable its findings are can be important. When you use a tool that monitors large numbers of web pages, it is more important that the tool provides reliable results. But when you are developing a website, it is important that a tool gives you as many potential issues as it can find and lets the developer figure out which are real issues and which are false positives.
>> 
>> - COMMENT 3 -
>> 2.4.1 Workflow integration mentions bug tracking. I would like this to be a little more extensive. For instance, are there protocols that bug/issue trackers use that are recommended? How should you ensure that the same issue doesn't get logged multiple times, either because it comes from a second evaluation or because it is an issue in a template and so repeats on many pages?
>> 
> 
> 

Hanno Lans www.datascape.nl
Middenweg 73, 2024 XA Haarlem
06-26076205 | hanno@datascape.nl | @hannolans 
