Re: some comments/questions on techniques instructions document for submitters

Dear Tim, Detlev,

On 19.8.2011 19:50, Boland Jr, Frederick E. wrote:
> Thanks for your insightful comments.  I think they are worthy of serious consideration.
> My thoughts, as you suggest, were just an input or starting point for further discussion
> on this topic.  Perhaps as part of the work of the EVAL TF we can come up with principles
> or characteristics of how an evaluation should be performed.

Yes, I agree that this is a useful discussion to have in Eval TF, and 
to bring consolidated suggestions back to the WCAG WG.


> Thanks and best wishes
> Tim Boland NIST
>
> PS - is it OK to post this discussion to the EVAL TF mailing list (it might be useful
>   information for the members of the TF)?

Yes it is. I have CC'ed Eval TF.

Best,
   Shadi


> -----Original Message-----
> From: w3c-wai-gl-request@w3.org [mailto:w3c-wai-gl-request@w3.org] On Behalf Of Detlev Fischer
> Sent: Friday, August 19, 2011 12:14 PM
> To: w3c-wai-gl@w3.org
> Subject: Re: some comments/questions on techniques instructions document for submitters
>
> Hi Tim Boland,
>
> EVAL TF has just started, so I went back to the level of atomic tests to
> see what their role might be in a practical accessibility evaluation
> approach.
>
>    Atomic tests limited to a specific technique are certainly useful as a
> heuristic for implementers of such a technique to check whether they
> have implemented it correctly, and the points in the techniques
> instructions, as well as your points on writing a 'good test', are
> therefore valid at this level.
>
> However, any evaluation procedure that checks conformance of content to
> a particular success criterion needs to consider quite a number of techniques
> in conjunction. The 'complication' you mention can be avoided at the level of
> an individual technique, but no longer at the level of an SC.
>
> Stating conformance to a particular SC might involve a large number of
> techniques and failures, some applied alternatively, others in
> conjunction. For example, when checking all page content for conformance to
> SC 1.1.1 (Non-Text Content), any of the following 15 techniques and
> failures might be relevant: G95, G94, G100, G92, G74, G73, G196, H37,
> H67, H45, F67, F3, F20, F39, F65. And this does not even include the
> techniques that provide accessible text replacements for background images.
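>
> Purely as an illustration of that concatenation (a hypothetical sketch, not
> the actual "How to Meet" logic for SC 1.1.1), the shape of such a rule might
> be written along these lines:
>
>     # Hypothetical sketch: the technique IDs are taken from the list above,
>     # but the way they are combined here is illustrative only, not the
>     # normative logic of the "How to Meet" document.
>     def sc_1_1_1_passes(results):
>         """results maps technique/failure IDs to True/False outcomes."""
>         sufficient = any(results.get(t, False) for t in ("G94", "G95", "H37"))
>         no_failure = not any(results.get(f, False) for f in ("F3", "F20", "F65"))
>         return sufficient and no_failure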
>
> My belief is that in *practical terms*, concatenating a large number of
> partly interrelated atomic tests to arrive at an SC conformance judgement
> is just not a practical approach for human evaluation. If we want a
> *usable*, i.e. manageable, procedure for a human tester to check whether
> the images on a page have proper alternative text, what *actually*
> happens is something more like pattern matching against known (recognised)
> failures:
>
> * Display all images together with alt text (and, where available, href)
> * Scan for instances of known failures - this also requires
>     checking the image context for cases like G74 and G196
> * Render page with custom colours (images now disappear) and check
>     whether text replacements for background images are displayed
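>
> To make the first of these steps concrete, a minimal sketch of how it might be
> scripted (assuming Python and BeautifulSoup, purely as one possible tool
> choice) could look like this:
>
>     # Sketch: list every <img> with its alt text and, where available, the
>     # href of an enclosing link, so a human reviewer can scan the output for
>     # known alt-text failures such as a missing alt attribute (F65).
>     import urllib.request
>     from bs4 import BeautifulSoup
>
>     def list_images(url):
>         html = urllib.request.urlopen(url).read()
>         soup = BeautifulSoup(html, "html.parser")
>         rows = []
>         for img in soup.find_all("img"):
>             link = img.find_parent("a")
>             rows.append({
>                 "src": img.get("src", ""),
>                 "alt": img.get("alt"),  # None: alt attribute missing entirely
>                 "href": link.get("href") if link else None,
>             })
>         return rows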
>
> Moreover, if the *severity* of failure needs to be reflected in the
> conformance claim or associated tolerance metrics, then the failure to
> provide alt text for a main navigation item or graphical submit button
> must not be treated the same way as the failure to provide alt on some
> supporter's logo in the footer of the page.
>
> My point is that while I am all for precision, the requirements for a
> rather complex integrated human assessment of a multitude of techniques
> and failures practically rule out an atomic approach where each
> applicable test of each applicable technique is carried out sequentially
> along the steps provided and then processed according to the logical
> concatenation of techniques given in the "How to meet" document. It
> simply would be far too cumbersome.
>
> I realise that you have not maintained that evaluation should be done
> that way - I just took your thoughts as a starting point. We have only
> just started with the EVAL task force work - I am curious what solutions
> we will arrive at to ensure rigor and mappability while still coming up
> with a manageable, doable approach.
>
> Regards,
> Detlev
>
> On 05.08.2011 16:28, Boland Jr, Frederick E. wrote:
>> For
>>
>> http://www.w3.org/WAI/GL/wiki/Technique_Instructions
>>
>> General Comments:
>>
>> Under "Tests" should there be guidance on limiting the number of steps
>> in a testing procedure (not making tests too involved)?
>>
>> (This gets to the question of "what makes a good test"?)
>>
>> In http://www.w3.org/QA/WG/2005/01/test-faq#good:
>>
>> "A good test is:
>>
>>    * Mappable to the specification (you must know what portion of the
>>      specification it tests)
>>    * Atomic (tests a single feature rather than multiple features)
>>    * Self-documenting (explains what it is testing and what output it
>>      expects)
>>    * Focused on the technology under test rather than on ancillary
>>      technologies
>>    * Correct "
>>
>> Does the information under "Tests" clearly convey information in these
>> items to potential submitters?
>>
>> Furthermore, do we want to have some language somewhere in the
>> instructions that submitted techniques should not be too "complicated"
>> (should just demonstrate simple features or atomic actions if possible)?
>>
>> Editorial Comments:
>>
>> under "Techniques Writeup Checklist", "UW2" should be expanded to
>> "Understanding WCAG2".
>>
>> 3rd bullet under "applicability" has lots of typos.
>>
>> Thanks and best wishes
>>
>> Tim Boland NIST
>>
>
>

-- 
Shadi Abou-Zahra - http://www.w3.org/People/shadi/
Activity Lead, W3C/WAI International Program Office
Evaluation and Repair Tools Working Group (ERT WG)
Research and Development Working Group (RDWG)

Received on Saturday, 20 August 2011 09:02:06 UTC