Re: Comments WAET

Hi Hanno,

We discussed the WAET comments in the meeting today and concluded that 
we will *not* send the feedback as a group. Each member should send 
their own comments.

We had a look at your comments and have added some 
questions/clarifications (see below) which you might want to take into 
account before sending your comments to the ERT.

By the way, your message didn't arrive on the auto-wcag mailing list. 
Maybe because of an attachment? Keep in mind that the W3C mailing list 
rules are rather strict.


Am 07.08.2014 um 14:58 schrieb Hanno Lans:
> Here are my comments. I also like your comments, Annika and Wilco, about
> aggregation (unique errors versus repeated errors) and combining
> results. As I am suddenly going on a short holiday, I'm not able to join
> the meeting later today.
>
> Hanno
>
> - COMMENT 1 -
> 2.1.9
> additional issues:
> Endless/infinite loops is . For example: endless calendars with an
> infinite amount of pages of every day in the future.

What do you mean?
Avoiding infinite loops is certainly an important requirement for a 
crawler, but it is not WAET-specific.

> - COMMENT 2 -
>
> "Capabilities related to features described in previous sections like:
> content negotiation, authentication support or session tracking."
> Features for 'dynamic content' should be added to this list, as this is
> an important one for some success criteria.

In WAET, "dynamic content" refers to user interactions. How could that be 
included in a crawler? And why would this be desirable?

> - COMMENT 3 -
> 2.2.2 Test modes: automatic, semiautomatic and manual
> "Some tools do not declare that they only perform automatic testing.
> Since it is a known fact that automatic tests only cover a small set of
> accessibility issues, full accessibility conformance can only be ensured
> by supporting developers and accessibility experts while testing in
> manual and semiautomatic mode."
> On the other hand, tools that don't declare that they are not doing
> full-automatic testing, but only give feedback for manual evaluation,
> are not very helpful in some cases. These tools report a lot of possible
> errors, but can't detect failures. It would be great that a tool declares
> which and/or how many tests are done automatically, and how many tests
> are automatic.

Something is wrong with the last sentence: "how many tests are done 
automatically, and how many tests are automatic." ???

> - COMMENT 4 -
> Also, note that it is easier to automatically detect the 'WCAG2 Failures',
> but it's harder to automatically evaluate the 'WCAG2 Sufficient
> Techniques'. For example, it is easy to detect an ALT placeholder text
> (F30), but it is harder to detect G94 (descriptive alternative text).
> So: detecting success criteria via failures can be done automatically
> (returning an EARL:FAIL); detecting semantically correct and descriptive
> use of techniques needs manual testing (if done, returning an EARL:PASS;
> if not manually tested, an EARL:CANTTELL).
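For illustration, the outcome mapping sketched in this comment could look roughly like the following (a minimal sketch; `map_outcome` is a hypothetical helper, and only the `earl:` outcome terms come from the EARL vocabulary, where the schema spells them `earl:failed`, `earl:passed` and `earl:cantTell`):

```python
# Hypothetical sketch of the outcome mapping described above:
# automated checks can assert earl:failed, while sufficient
# techniques need a manual verdict before earl:passed applies.
# The function name and parameters are illustrative, not from EARL.

def map_outcome(automated_failure_detected: bool,
                manually_verified_pass: bool) -> str:
    if automated_failure_detected:
        return "earl:failed"    # e.g. an F30 placeholder ALT found automatically
    if manually_verified_pass:
        return "earl:passed"    # e.g. G94 descriptiveness confirmed by a human
    return "earl:cantTell"      # not (yet) manually tested

print(map_outcome(True, False))   # earl:failed
print(map_outcome(False, True))   # earl:passed
print(map_outcome(False, False))  # earl:cantTell
```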

Which section of WAET are you referring to?

> - COMMENT 5 -
> 2.4.2 Localization and internationalization
> Besides the UI, some tests also need internationalisation. For example
> placeholders should be tested in the local language.

This is covered in WAET already: "It must be considered as well that 
some accessibility tests need to be customized to other languages, such 
as those related to readability."

> - COMMENT 6 -
> *3 Examples of evaluation tools*
>
> We can add a WYSIWYG plugin:
>
>
>       3.3 Example Tool D: Accessibility tool for rich text editors
>
> Tool D is an accessibility evaluation tool for rich text editors. The
> tool supports content authors when writing content for web pages.
>
>   * The tool lives in the rich text editor, behind a button
>   * The tool takes into account the processed HTML output of the
>     rich text editor (page language etc.)
>   * The tool might give suggestions as you type and additional hints.
>
>
> - COMMENT 7 -
>
>
>       3.4 Side-by-Side Comparison of the Example Tools
>
> Tool B's "Dynamic content": 'yes' is a bit optimistic. With dynamic content
> we have: search pages, single-page forms, multi-page forms, filters,
> video content, all kinds of buttons with open and close, fullscreen,
> interactive tools, etc. I can't imagine a situation where a tool can do
> all of this, so this category should be more specific. For example:
> testing of single-page forms, analysing search results, speech analysis, etc.
>
> - COMMENT 8 -
> "Test modes: automatic, semiautomatic and manual": this needs
> specification: for the WCAG Failures, how many are automatically tested,
> how many Techniques are automatically tested, and how many tests need
> the semi-automatic mode.

EARL defines the test modes already. What kind of specification are you 
suggesting in addition?
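For reference, a sketch of the mode vocabulary in question, assuming the test-mode values defined by the EARL 1.0 Schema (`earl:TestMode`); the tallying helper is purely illustrative and not part of EARL:

```python
# The mode values below come from the EARL 1.0 Schema's earl:TestMode.
# count_by_mode is a hypothetical helper showing how a tool could report
# how many of its tests run in each mode, as Hanno's comment asks for.

EARL_TEST_MODES = {
    "earl:automatic",    # no human involvement
    "earl:manual",       # primarily carried out by a human
    "earl:semiAuto",     # tool-assisted, a human makes the decision
    "earl:undisclosed",  # mode not disclosed
    "earl:unknownMode",  # mode unknown
}

def count_by_mode(modes):
    """Tally how many of a tool's tests run in each EARL test mode."""
    counts = {}
    for mode in modes:
        if mode not in EARL_TEST_MODES:
            raise ValueError(f"not an EARL test mode: {mode}")
        counts[mode] = counts.get(mode, 0) + 1
    return counts

print(count_by_mode(["earl:automatic", "earl:automatic", "earl:semiAuto"]))
# {'earl:automatic': 2, 'earl:semiAuto': 1}
```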


Kind regards & enjoy your holiday
Annika

Received on Thursday, 7 August 2014 15:26:26 UTC