Re: WCAG 2.0 automated verification and intended reporting layout

The ERT WG home page is http://www.w3.org/WAI/ER/ . You can find
additional information there, including contact details.

Regards,
Loretta

On Thu, Oct 23, 2008 at 9:27 PM, Dylan Nicholson
<d.nicholson@hisoftware.com> wrote:
> Automated testing tools are often used on sites of upwards of a million pages, so manual human verification of every page is effectively impossible.  However, manual verification of, say, pages that contain "audio-only presentations" is more realistic - this assumes there is an automated method of identifying pages that use such presentations.  It would be nice if there were standard XHTML markup to identify presentations as audio-only / video-only / etc.
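>
> As a rough illustration, a crawler could at least flag candidate pages
> for manual review by scanning for media markup. A minimal Python sketch
> (standard library only; the tag list is a heuristic assumption on my
> part, since XHTML 1.0 has no dedicated audio-only markup):
>
> import urllib.request
> from html.parser import HTMLParser
>
> # Tags that *may* carry an audio or video presentation; a heuristic
> # guess, not a standard classification.
> MEDIA_TAGS = {"object", "embed", "audio", "video"}
>
> class MediaFlagger(HTMLParser):
>     """Flags a page for manual review if it contains media markup."""
>     def __init__(self):
>         super().__init__()
>         self.needs_manual_review = False
>
>     def handle_starttag(self, tag, attrs):
>         if tag.lower() in MEDIA_TAGS:
>             self.needs_manual_review = True
>
> def flag_page(url):
>     html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
>     parser = MediaFlagger()
>     parser.feed(html)
>     return parser.needs_manual_review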
>
> Is there a separate contact address for the ERT WG?
> ________________________________________
> From: Loretta Guarino Reid [lorettaguarino@google.com]
> Sent: Friday, 24 October 2008 1:08 PM
> To: Dylan Nicholson
> Cc: public-comments-wcag20@w3.org
> Subject: Re: WCAG 2.0 automated verification and intended reporting layout
>
> On Tue, Oct 14, 2008 at 8:17 PM, Dylan Nicholson
> <d.nicholson@hisoftware.com> wrote:
>> Hello,
>>
>> Has any thought been given to the intended reporting layout for tools
>> that automatically verify websites for WCAG 2.0 compliance?  As a developer,
>> the logical "testing unit" would seem to be a "technique", while the logical
>> grouping is a "success criterion".  But many techniques are shared across
>> multiple criteria, so it seems that "technique" results would necessarily
>> be shown more than once, e.g.:
>>
>> Success Criterion 1.1.1
>>    H36 - passed
>>    H2 - passed
>>    H37 - passed
>>    ...
>> Success Criterion 2.4.4
>>    ...
>>    H2 - passed
>>    ...
>> Success Criterion 2.4.9
>>    ...
>>    H2 - passed
>>
>> Further, would a comprehensive report be expected to include the "G"
>> techniques, which generally can't be fully automated but could be listed
>> as advice to the user on how to check the page, potentially filtered
>> automatically by relevance (e.g., there is no point showing G94 if a
>> page has no non-text content)?
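>>
>> To make the duplication concrete, a minimal data model (a Python
>> sketch; the technique-to-criterion mapping shown is a tiny assumed
>> subset) would store each technique result once and reference it from
>> every criterion that uses it:
>>
>> # Each technique result is recorded exactly once...
>> technique_results = {"H36": "passed", "H2": "passed", "H37": "passed"}
>>
>> # ...and referenced from every success criterion that uses it.
>> criterion_techniques = {
>>     "1.1.1": ["H36", "H2", "H37"],
>>     "2.4.4": ["H2"],
>>     "2.4.9": ["H2"],
>> }
>>
>> for criterion, techniques in sorted(criterion_techniques.items()):
>>     print("Success Criterion", criterion)
>>     for t in techniques:
>>         print("   ", t, "-", technique_results[t])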
>>
>> Thanks,
>>
>> Dylan
>>
>>
> ================================
> Response from the Working Group
> ================================
> Grouping by Success Criterion is how we organized the techniques in
> How to Meet WCAG 2.0, and we expect a tool would group its results the
> same way.
>
> The specific reporting format is a differentiating feature among
> evaluation tools. There are many ways to present the information to
> the user, some of which are more appropriate for particular contexts
> than others. It is beyond the scope of the WCAG WG to make
> recommendations about this aspect of the evaluation tool's user
> interface and functionality.
>
> With regard to the General techniques (and many of the
> technology-specific techniques), it is true that many cannot be tested
> automatically.  As a result they require human testing.  Any tool
> should both REQUIRE that the human test be conducted and PROVIDE a
> means to record the result.  Further, no tool should pass a page
> unless the human testing is complete.
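>
> As a sketch of that gating logic (Python; the result-recording
> structure is purely illustrative, not a prescribed format):
>
> # A page passes only if every automated check passed AND every
> # required human check has been recorded with a passing result.
> def page_passes(automated_results, human_results, required_human_checks):
>     if not all(r == "passed" for r in automated_results.values()):
>         return False
>     for check in required_human_checks:
>         # A missing entry means the human test was never conducted.
>         if human_results.get(check) != "passed":
>             return False
>     return True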
>
> Requirements that need human testing are just as binding as those that
> can be automated.  Because techniques and failures are not normative,
> they should not be considered advice, but rather requirements that
> must be tested for by human testers, equal to those requirements that
> can be tested automatically.
>
> The Evaluation and Repair Tools Working Group (ERT WG) is working on a
> standardized vocabulary to express test results: Evaluation and Report
> Language (EARL; http://www.w3.org/TR/EARL10-Schema/). This vocabulary
> can express results both from automated testing and from human
> evaluation.
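>
> For instance, a tool could record a manually determined result as a
> few RDF triples. A sketch using the rdflib Python library (the
> evaluator, page, and test URIs are made up for illustration):
>
> from rdflib import Graph, Namespace, URIRef, BNode
> from rdflib.namespace import RDF
>
> EARL = Namespace("http://www.w3.org/ns/earl#")
> g = Graph()
> g.bind("earl", EARL)
>
> assertion, result = BNode(), BNode()
> g.add((assertion, RDF.type, EARL.Assertion))
> g.add((assertion, EARL.assertedBy, URIRef("http://example.org/evaluators/1")))
> g.add((assertion, EARL.subject, URIRef("http://example.org/page.html")))
> g.add((assertion, EARL.test, URIRef("http://www.w3.org/TR/WCAG20-TECHS/H2")))
> g.add((assertion, EARL.mode, EARL.manual))  # result from human evaluation
> g.add((assertion, EARL.result, result))
> g.add((result, RDF.type, EARL.TestResult))
> g.add((result, EARL.outcome, EARL.passed))
>
> print(g.serialize(format="turtle"))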
>
> Loretta Guarino Reid, WCAG WG Co-Chair
> Gregg Vanderheiden, WCAG WG Co-Chair
> Michael Cooper, WCAG WG Staff Contact
>
>
> On behalf of the WCAG Working Group
>

Received on Friday, 24 October 2008 05:33:35 UTC