- From: Boland Jr, Frederick E. <frederick.boland@nist.gov>
- Date: Fri, 19 Aug 2011 13:50:53 -0400
- To: Detlev Fischer <fischer@dias.de>
- CC: 'WCAG' <w3c-wai-gl@w3.org>
Thanks for your insightful comments. I think they are worthy of serious consideration.
As you suggest, my thoughts were intended just as an input or starting point
for further discussion on this topic. Perhaps as part of the work of the
EVAL TF we can come up with principles or characteristics of how an
evaluation should be performed.
Thanks and best wishes
Tim Boland NIST
PS - is it OK to post this discussion to the EVAL TF mailing list (it might be useful
information for the members of the TF)?
-----Original Message-----
From: w3c-wai-gl-request@w3.org [mailto:w3c-wai-gl-request@w3.org] On Behalf Of Detlev Fischer
Sent: Friday, August 19, 2011 12:14 PM
To: w3c-wai-gl@w3.org
Subject: Re: some comments/questions on techniques instructions document for submitters
Hi Tim Boland,
EVAL TF has just started, so I went back to the level of atomic tests to
see what their role might be in a practical accessibility evaluation
approach.
Atomic tests limited to a specific technique are certainly useful as a
heuristic for implementers of that technique to check whether they have
implemented it correctly, and the points in the techniques instructions,
as well as your points on writing a 'good test', are therefore valid on
this level.
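To make this concrete, here is a minimal sketch of what such an atomic,
self-documenting check might look like for one technique, H37 (using alt
attributes on img elements). This is my own illustration using Python's
standard html.parser; a real H37 test involves more conditions than this:

  # Atomic test sketch for H37: every img element needs an alt
  # attribute. Illustrative only, not a complete H37 procedure.
  from html.parser import HTMLParser

  class ImgAltChecker(HTMLParser):
      """Collects every img element that lacks an alt attribute."""
      def __init__(self):
          super().__init__()
          self.failures = []

      def handle_starttag(self, tag, attrs):
          a = dict(attrs)
          if tag == "img" and "alt" not in a:
              self.failures.append(a.get("src", "<no src>"))

  def check_h37(html):
      """Returns the src of every img lacking alt; empty list = pass."""
      checker = ImgAltChecker()
      checker.feed(html)
      return checker.failures

  print(check_h37('<img src="a.png" alt="Logo"><img src="b.png">'))
  # -> ['b.png']

The value of such a test lies precisely in its narrow scope: it maps to
one technique, checks a single feature, and documents its own pass
condition.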
However, any evaluation procedure that checks conformance of content to
particular success criteria (SC) needs to consider quite a number of
techniques in conjunction. The 'complication' you mention can be avoided
at the level of the individual technique, but no longer at the level of
the SC.
Stating conformance to a particular SC might involve a large number of
techniques and failures, some applied as alternatives, others in
conjunction. For example, when checking all page content for compliance
with SC 1.1.1 (Non-Text Content), any of the following 15 techniques and
failures might be relevant: G95, G94, G100, G92, G74, G73, G196, H37,
H67, H45, F67, F3, F20, F39, F65. And this does not even include the
techniques which provide accessible text replacements for background
images.
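To see why this quickly becomes unwieldy, one can sketch the combination
logic roughly as follows. This is my own simplified reading of the "How
to Meet" structure, not a normative encoding; the technique lists are
the ones given above:

  # Simplified sketch: sufficient techniques combine as alternatives;
  # any triggered failure overrides a pass. Not a normative encoding.
  SC_1_1_1 = {
      "sufficient": ["G95", "G94", "G100", "G92", "G74", "G73",
                     "G196", "H37", "H67", "H45"],
      "failures": ["F67", "F3", "F20", "F39", "F65"],
  }

  def sc_passes(passed, triggered, sc=SC_1_1_1):
      """passed: techniques that applied and passed;
      triggered: failures detected on the page."""
      if any(f in sc["failures"] for f in triggered):
          return False
      return any(t in sc["sufficient"] for t in passed)

  print(sc_passes({"H37"}, set()))    # True
  print(sc_passes({"H37"}, {"F65"}))  # False

And this still hides the per-test steps a human evaluator would have to
work through for every single technique on the list.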
My belief is that in *practical terms*, concatenating a large number of
partly interrelated atomic tests to arrive at an SC conformance
judgement is simply not workable for human evaluation. If we want a
*usable*, i.e. manageable, procedure for a human tester to check whether
the images on a page have proper alternative text, what *actually*
happens is something more like pattern matching against known
(recognised) failures (a rough sketch of the first step follows the
list):
* Display all images together with their alt text (and, where
available, the href of an enclosing link)
* Scan for instances of known failures - this also requires checking
the image context for cases like G74 and G196
* Render the page with custom colours (images now disappear) and check
whether text replacements for background images are displayed
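A rough sketch of that first step, assuming the page is available as an
HTML string (again my own illustration; a real audit tool would present
this visually rather than on a console):

  # List each img with its alt text and, where the image sits inside
  # a link, the link target. Illustrative only.
  from html.parser import HTMLParser

  class ImageLister(HTMLParser):
      def __init__(self):
          super().__init__()
          self.current_href = None   # href of the enclosing <a>, if any
          self.rows = []

      def handle_starttag(self, tag, attrs):
          a = dict(attrs)
          if tag == "a":
              self.current_href = a.get("href")
          elif tag == "img":
              self.rows.append((a.get("src"), a.get("alt"),
                                self.current_href))

      def handle_endtag(self, tag):
          if tag == "a":
              self.current_href = None

  lister = ImageLister()
  lister.feed('<a href="/home"><img src="logo.png" alt=""></a>')
  for src, alt, href in lister.rows:
      print("src=%r alt=%r href=%r" % (src, alt, href))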
Moreover, if the *severity* of a failure needs to be reflected in the
conformance claim or in associated tolerance metrics, then the failure
to provide alt text for a main navigation item or a graphical submit
button must not be treated the same way as the failure to provide alt
text on some supporter's logo in the footer of the page.
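As a toy illustration of such a weighting (the context categories and
weights here are entirely hypothetical, only meant to show that
identical failure types can carry different weight in a conformance
judgement):

  # Hypothetical severity weights per page context.
  SEVERITY = {
      "main_navigation": 10,  # missing alt here blocks core tasks
      "submit_button": 10,
      "footer_logo": 1,       # minor impact
  }

  failures = [
      ("main_navigation", "img without alt"),
      ("footer_logo", "img without alt"),
  ]

  score = sum(SEVERITY[context] for context, _ in failures)
  print(score)  # 11 - dominated by the navigation failure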
My point is that while I am all for precision, the requirements of a
rather complex, integrated human assessment of a multitude of
techniques and failures practically rule out an atomic approach in
which each applicable test of each applicable technique is carried out
sequentially along the steps provided and then processed according to
the logical concatenation of techniques given in the "How to Meet"
document. It simply would be far too cumbersome.
I realise that you have not maintained that evaluation should be done
that way - I just took your thoughts as a starting point. We have only
just started with the EVAL task force work - I am curious what
solutions we will arrive at to ensure rigour and mappability while
still coming up with a manageable, doable approach.
Regards,
Detlev
On 05.08.2011 16:28, Boland Jr, Frederick E. wrote:
> For
>
> http://www.w3.org/WAI/GL/wiki/Technique_Instructions
>
> General Comments:
>
> Under "Tests" should there be guidance on limiting the number of steps
> in a testing procedure (not making tests too involved)?
>
> (This gets to the question of "what makes a good test"?)
>
> In http://www.w3.org/QA/WG/2005/01/test-faq#good
>
> "A good test is:
>
> * Mappable to the specification (you must know what portion of the
> specification it tests)
> * Atomic (tests a single feature rather than multiple features)
> * Self-documenting (explains what it is testing and what output it
> expects)
> * Focused on the technology under test rather than on ancillary
> technologies
> * Correct "
>
> Does the information under "Tests" clearly convey information in these
> items to potential submitters?
>
> Furthermore, do we want to have some language somewhere in the
> instructions that submitted techniques should not be too "complicated"
> (should just demonstrate simple features or atomic actions if possible)?
>
> Editorial Comments:
>
> Under "Techniques Writeup Checklist", "UW2" should be expanded to
> "Understanding WCAG2".
>
> The 3rd bullet under "Applicability" has lots of typos.
>
> Thanks and best wishes
>
> Tim Boland NIST
>
--
---------------------------------------------------------------
Detlev Fischer PhD
DIAS GmbH - Daten, Informationssysteme und Analysen im Sozialen
Phone: +49-40-43 18 75-25
Mobile: +49-157 7-170 73 84
Fax: +49-40-43 18 75-19
E-Mail: fischer@dias.de
Address: Schulterblatt 36, D-20357 Hamburg
Registered: Amtsgericht Hamburg HRB 58 167
Managing directors: Thomas Lilienthal, Michael Zapp
---------------------------------------------------------------