RE: WCAG 2.0 automated verification and intended reporting layout

One other comment:

It seems many of the Fxx techniques are suitable for automation.

I notice that F30 is listed as a technique for both 1.1.1 and 1.2.1.  However, 1.2.1 only applies to prerecorded audio-only and video-only media.  So the F30 check for 1.2.1 is actually, specifically:

* Check the text alternative for each prerecorded audio-only and video-only media presentation to see if it is not actually a text alternative for the non-text content

But my question would be: why isn't this already covered by 1.1.1, which pertains to ALL non-text content?

1.1.1 specifies that for time-based media, "text alternatives at least provide descriptive identification of the content".  So if the text alternative for any time-based media can be detected to contain just the file name or placeholder text (e.g. just the words "video presentation") as part of testing 1.1.1, then the page has already failed Level A compliance, and there seems little point repeating the same test as part of 1.2.1.
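To illustrate, the kind of placeholder detection described above could be sketched roughly as follows. The phrase list, file-name pattern, and function name are all my own assumptions for illustration; a real checker would need a much richer heuristic:

```python
import re

# Hypothetical helper: flags text alternatives that are clearly not
# descriptive identifications (bare file names or generic placeholder
# phrases), in the spirit of failure F30 under SC 1.1.1.
PLACEHOLDER_PHRASES = {"image", "picture", "photo", "graphic",
                       "audio", "video", "video presentation"}
FILENAME_PATTERN = re.compile(
    r"^[\w\-. ]+\.(png|jpe?g|gif|mp3|mp4|wav|mov|avi)$", re.IGNORECASE)

def is_placeholder_alt(alt_text):
    """Return True if the text alternative looks like a file name or a
    generic placeholder rather than a descriptive identification."""
    text = alt_text.strip().lower()
    if not text:
        return True
    if FILENAME_PATTERN.match(text):
        return True
    return text in PLACEHOLDER_PHRASES

print(is_placeholder_alt("video presentation"))  # True
print(is_placeholder_alt("intro.mp4"))           # True
print(is_placeholder_alt("CEO address to the 2008 shareholder meeting"))  # False
```

A check along these lines would already fail the page at 1.1.1, which is exactly the point: the same detection need not be repeated under 1.2.1.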

The only separate "technique" I could think of that would be useful for 1.2.1 - where the requirement is to provide a full transcript rather than a "descriptive identification" - is some sort of test based on the length of the alternative text.  For example, a video presentation with only a 40-character alternative text may well pass 1.1.1 but should then fail 1.2.1, as 40 characters is almost certainly insufficient for a full transcript.
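A minimal sketch of such a length-based heuristic might look like this. The 100-character threshold and the one-character-per-second lower bound are purely illustrative assumptions, not anything WCAG specifies:

```python
# Hypothetical heuristic for SC 1.2.1: a text alternative shorter than
# some threshold is very unlikely to be a full transcript of a
# time-based presentation.
MIN_PLAUSIBLE_TRANSCRIPT_LENGTH = 100  # assumed threshold, illustrative only

def could_be_full_transcript(alt_text, duration_seconds=None):
    """Flag text alternatives too short to plausibly be a transcript.
    If the media duration is known, additionally require roughly one
    character per second of media as a (very rough, assumed) bound."""
    length = len(alt_text.strip())
    if length < MIN_PLAUSIBLE_TRANSCRIPT_LENGTH:
        return False
    if duration_seconds is not None and length < duration_seconds:
        return False
    return True

print(could_be_full_transcript("A 40 character description of the video"))
# False: well under the assumed threshold, so it would be flagged for 1.2.1
```

Such a test could never prove a transcript is complete, but it could automatically flag alternatives that are obviously too short.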

Actually, what surprises me is that both 1.1.1 and 1.2.1 are "Level A".

It seems to me the logical requirement hierarchy would be:

To achieve Level A compliance, all time-based media must "at least provide descriptive identification of the content", as per guideline 1.1

To achieve Level AA compliance, all time-based media must include a full textual transcript of the content, as per guideline 1.2

However, given both are Level A, it's not clear why 1.1 specifies anything at all with respect to time-based media, unless:

a) The 1.1 time-based media requirements only apply to presentation types that don't fit any of the categories that 1.2 covers - but what's an example of such a type?

b) Time-based media is actually expected to have BOTH types of text alternative: one that provides "a descriptive identification", and one that provides a full transcript.

Otherwise if 1.2 covers the full requirements for all time-based media then why does 1.1 have anything to say about it at all?

Would appreciate any clarification in this area!

Thanks

Dylan

________________________________________
From: Loretta Guarino Reid [lorettaguarino@google.com]
Sent: Friday, 24 October 2008 1:08 PM
To: Dylan Nicholson
Cc: public-comments-wcag20@w3.org
Subject: Re: WCAG 2.0 automated verification and intended reporting layout

On Tue, Oct 14, 2008 at 8:17 PM, Dylan Nicholson
<d.nicholson@hisoftware.com> wrote:
> Hello,
>
> Has any thought been given to the intended reporting layout for tools
> that automatically verify websites for WCAG 2.0 compliance?  As a developer,
> the logical "testing unit" would seem to be a "technique", while the logical
> grouping is a "success criterion".  But many techniques are shared across
> multiple criteria, so it seems that "technique" results would necessarily
> be shown more than once, e.g.:
>
> Success Criteria 1.1.1
>    H36 - passed
>    H2 - passed
>    H37 - passed
>    ...
> Success Criteria 2.4.4
>    ...
>    H2 - passed
>    ...
> Success Criteria 2.4.9
>    ...
>    H2 - passed
>
> Further, would a comprehensive report be expected to include the "G"
> techniques, which generally can't be fully automated, but could be listed as
> advice to the user as to how to check the page, potentially automatically
> filtering out which pages they are relevant to (e.g., no point showing G94
> if a page has no non-text content)?
>
> Thanks,
>
> Dylan
>
>
================================
Response from the Working Group
================================
Grouping by Success Criterion is how we organized them in How to Meet
WCAG 2.0, and we think this is how a tool would group them too.

The specific reporting format is a differentiating feature between
evaluation tools. There are many ways to present the information to
the user, some of which are more appropriate for particular contexts
than others. It is beyond the scope of the WCAG WG to make
recommendations about this aspect of the evaluation tool's user
interface and functionality.

With regard to the General techniques (and many of the technology
specific techniques) it is true that many cannot be automatically
tested.  As a result they would need human testing.  Any tool should
both REQUIRE that the human test be conducted and PROVIDE a means to
record the result.  Further - no tool should pass a page unless the
human testing was complete.
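The bookkeeping the response describes - requiring human checks, recording their results, and withholding a pass until all are complete - could be sketched as below. All names and structure here are my own assumptions, not anything the Working Group prescribes:

```python
from dataclasses import dataclass, field

# Hypothetical bookkeeping for mixed automated/human checks. A page
# passes only once every check has a recorded "pass" result, i.e. no
# human check is still pending and nothing has failed.
@dataclass
class CheckResult:
    technique: str          # e.g. "H36", "G94" (illustrative labels)
    automated: bool
    result: str = "pending" # "pass", "fail", or "pending"

@dataclass
class PageReport:
    url: str
    checks: list = field(default_factory=list)

    def record(self, technique, result, automated=False):
        self.checks.append(CheckResult(technique, automated, result))

    def page_passes(self):
        """True only if at least one check exists and every recorded
        result is a pass - pending human checks block a pass."""
        return bool(self.checks) and all(c.result == "pass" for c in self.checks)

report = PageReport("http://example.com/")
report.record("H36", "pass", automated=True)
report.record("G94", "pending")      # human check not yet carried out
print(report.page_passes())          # False: human testing incomplete
```

In practice a tool would likely serialize such results in EARL rather than an ad hoc structure, but the gating logic is the same.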

Requirements that need human testing are just as required as those
that can be automated.  Although techniques and failures are not
normative, they should not be considered as advice but rather as
requirements that must be tested for by human testers, equal to
those requirements that can be automatically tested.

The Evaluation and Repair Tools Working Group (ERT WG) is working on a
standardized vocabulary to express test results: Evaluation and Report
Language (EARL; http://www.w3.org/TR/EARL10-Schema/). This vocabulary
can express results both from automated testing and from human
evaluation.

Loretta Guarino Reid, WCAG WG Co-Chair
Gregg Vanderheiden, WCAG WG Co-Chair
Michael Cooper, WCAG WG Staff Contact


On behalf of the WCAG Working Group

Received on Friday, 24 October 2008 04:55:57 UTC