Re: Success, Failure techniques - side issue for discussion

Hi Detlev, 

My approach to this work is to try to make the best evaluation methodology for use by first-party and third-party groups worldwide.  This is regardless of whether or not it impacts my current business practice, which is perhaps why our evaluation contexts are quite different... times change, and so must we...

I strongly believe in what W3C/WAI are trying to achieve, and I firmly support the use of all their documentation, although there is still a good deal of outreach and education work to be done.

So I simply cannot understand why a number of people in this W3C/WAI group talk about the WCAG 2.0 Techniques so negatively.

It is my hope (realistic or not) that once the majority of web developers understand that the clearest way to demonstrate conformance with WCAG 2.0 to any evaluator is to use sufficient techniques and not to trigger failure conditions, they will swiftly move over to using those techniques, and possibly drop some of the techniques they have been advised to use in the past.

It is also my hope that we and our W3C/WAI WCAG 2.0 Evaluation Methodology will support them.

All the best 

Alistair

On 13 Jun 2012, at 16:24, detlev.fischer@testkreis.de wrote:

> Hi Alistair,
> 
> In our common evaluation context (mostly public domain sites), this would not work. The agencies are certainly knowledgeable about a11y issues (and often get input from a pre-test in the development phase), but I doubt they think along the lines of WCAG Techniques. Do you really believe it is realistic to expect them to map the techniques they used in their design to the hundreds of WCAG Techniques?
> 
> But maybe your evaluation context is quite different, I don't know.
> 
> I think the only situation where looking at Techniques provided by the commissioner/client is helpful is in the case of novel techniques - say, some fancy HTML5 or WAI-ARIA-enhanced custom widgets. In that case, the real problem would be gauging the degree of accessibility support needed to consider the novel technique sufficient. This is not a clear-cut matter as WCAG says:
> 
> "The Working Group, therefore, limited itself to defining what constituted support and defers the judgment of how much, how many, or which AT must support a technology to the community and to entities closer to each situation that set requirements for an organization, purchase, community, etc."
> (end of http://www.w3.org/TR/UNDERSTANDING-WCAG20/conformance.html#uc-support-level-head )
> 
> Regards,
> Detlev
> 
> 
> 
> ----- Original Message -----
> From: alistair.j.garrison@gmail.com
> To: richard.warren@userite.com, public-wai-evaltf@w3.org
> Date: 13.06.2012 16:07:01
> Subject: Re: Success, Failure techniques - side issue for discussion
> 
> 
>> Dear All, 
>> 
>> "an evaluator needs a procedure which is capable of recognising and analysing the use (or not) of those techniques (added: and failure conditions) whilst still being aware that there could be alternative solutions"…
>> 
>> Might such a procedure be:
>> 1)     ask the web developer what techniques they used; 
>> 2)     determine if these techniques broadly fulfil the relevant Success Criteria;
>> 3)     if they do: evaluate if their selected techniques have been properly implemented, and evaluate all relevant failure techniques; and 
>>     if they don't: suggest further techniques, but still evaluate if their selected techniques have been properly implemented, and evaluate all relevant failure techniques.
>> 
>> You would, of course, need to ask for the techniques in order to make such a procedure reproducible; a rough sketch of the idea follows below.
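>> 
>> To make that concrete, here is a very rough Python-style sketch of the procedure. It is purely illustrative: every helper function below is a hypothetical placeholder for the evaluator's own judgement, not any real tool or API.
>> 
>> def evaluate_success_criterion(sc, declared_techniques, page):
>>     # 1) declared_techniques are the techniques the web developer says they used
>>     # 2) do they broadly fulfil the relevant Success Criterion?
>>     if not broadly_fulfils(declared_techniques, sc):
>>         # if not: suggest further techniques, but still evaluate what was used
>>         declared_techniques = declared_techniques + suggest_further_techniques(sc, page)
>>     # 3) check the selected techniques are properly implemented...
>>     implemented_ok = all(properly_implemented(t, page) for t in declared_techniques)
>>     # ...and evaluate all relevant failure conditions
>>     no_failures = not any(failure_applies(f, page) for f in relevant_failures(sc))
>>     return implemented_ok and no_failures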
>> 
>> All the best 
>> 
>> Alistair 
>> 
>> On 13 Jun 2012, at 15:35, RichardWarren wrote:
>> 
>>> Hi Shadi,
>>> 
>>> Thank you - I believe that your argument reinforces my point that we should concentrate on procedures for checking compliance, not solely the existence (or not) of certain techniques. Yes, F65 says that no alt = failure, but H2 says that omitting alt text is acceptable if the image is a link that also contains text within the anchor element.
>>> 
>>> I do not think it is our task to refine WCAG techniques etc., but rather to check for compliance with the actual GUIDELINES in practice and intent, to ensure that the web content is accessible to all users. We thus need a procedure that checks first for the obvious (in this case: has the developer used the technique of including an alt attribute, and is it suitable?). Only then, if the obvious technique has not been used, we need to include a check to see if the image is included in an anchor (or other similar resource) with adjacent text within that resource (H2). Or, indeed, any other technique that ensures AT users can understand what the image is for/about.
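>>> 
>>> As a purely illustrative sketch of that order of checks (assuming something like a BeautifulSoup-parsed <img> element; the two judgement helpers are hypothetical and would be manual checks in practice):
>>> 
>>> def image_text_alternative_ok(img):
>>>     # The obvious check first: is there an alt attribute, and is it suitable?
>>>     if img.has_attr("alt"):
>>>         return alt_text_is_suitable(img)      # manual judgement
>>>     # Only then: is the image inside an anchor (or similar) whose adjacent
>>>     # text already describes it (cf. Technique H2)?
>>>     anchor = img.find_parent("a")
>>>     if anchor is not None and anchor.get_text(strip=True):
>>>         return True
>>>     # Otherwise, look for any other technique that lets AT users understand
>>>     # what the image is for - again a manual review, not a pattern match.
>>>     return passes_manual_review(img)          # manual judgement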
>>> 
>>> I am afraid that evaluation cannot be properly done by simply failing an issue because a certain "General Failure" applies. I still believe that Success and Failure Techniques are primarily aimed at the web developer, whereas an evaluator needs a procedure which is capable of recognising and analysing the use (or not) of those techniques whilst still being aware that there could be alternative solutions.
>>> 
>>> If we stick stubbornly to the published techniques, and only the published techniques, we are in danger of stifling the development of the web.
>>> 
>>> Regards
>>> 
>>> Richard
>>> 
>>> 
>>> 
>>> -----Original Message----- From: Shadi Abou-Zahra
>>> Sent: Wednesday, June 13, 2012 1:20 PM
>>> To: Richard Warren
>>> Cc: Eval TF
>>> Subject: Re: Success, Failure techniques - side issue for discussion
>>> 
>>> Hi Richard,
>>> 
>>> Looking at "General Failure F65" as per your example:
>>> 
>>> Case 1 correctly fails because there is no alt attribute, and a screen
>>> reader would in most cases start reading the filename. Your example
>>> would work if you used null alt text, as "General Failure F65" points
>>> out in its "Related Techniques" section.
>>> 
>>> Case 2 uses the alt attribute so it does not fail "General Failure F65"
>>> (but we can't say much more about its conformance just from F65 alone).
>>> 
>>> Now this is exactly the point: by looking only at the section called
>>> "Tests" we miss out important context and explanations, such as the
>>> important reference to "Technique H67" in this example.
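>>> 
>>> For illustration only, a very rough sketch of the difference (Python-style;
>>> the is_decorative helper is hypothetical and stands for human judgement):
>>> 
>>> def fails_f65(img):
>>>     # The bare F65 test: an img with no alt attribute at all fails.
>>>     return not img.has_attr("alt")
>>> 
>>> def acceptable_per_h67(img):
>>>     # The context F65 points to: per Technique H67, alt="" with no title
>>>     # is acceptable when the image is purely decorative.
>>>     return img.get("alt") == "" and not img.get("title") and is_decorative(img)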
>>> 
>>> WCAG 2.0 Techniques and Failures (as Detlev correctly points out the
>>> terminology should be) are far from complete or perfect. We can talk
>>> about how to improve them, both in how they are written and in how
>>> they are presented to evaluators. We can also explain the concept in
>>> our document more clearly. I think this would get more to the core of
>>> the problem than trying to re-label the sections as they are.
>>> 
>>> Regards,
>>> Shadi
>>> 
>>> 
>>> On 13.6.2012 13:04, RichardWarren wrote:
>>>> Sorry but I got my cases mixed up.
>>>> The last paragraphs should have read
>>>> 
>>>> NOW here is the rub. – Failure F65 says that both my case 1 and H2 are failures because neither uses the alt attribute!!!! So if I rely on Failure Techniques I would fail both my case 1 and anything using H2.
>>>> 
>>>> HOWEVER – using testing procedures I can check that case 2 passes because it has (reasonably) meaningful alt attributes; whilst case 1 passes because it makes perfect sense when read out by my screen reader, my blind testers confirm it is good, and it still makes sense if the image fails to display. The only thing about case 1 is that Google will not catalogue the image (which might be a good thing!)
>>>> 
>>>> Sorry about that – poor proofreading on my part
>>>> Richard
>>>> 
>>>> From: RichardWarren
>>>> Sent: Wednesday, June 13, 2012 11:21 AM
>>>> To: Eval TF
>>>> Subject: Success, Failure techniques - side issue for discussion
>>>> 
>>>> Hi.
>>>> I would like to drop in a (very rough) example to explain why I am concerned that we are getting hung up on the techniques used by the developers rather than the procedures used by the evaluator.
>>>> 
>>>> Case 1
>>>> <ol>
>>>> <li>Here is a picture of Uncle Fred wearing his bright Christmas Jumper <img src="fred.jpg"></li>
>>>> <li>Here is a picture of Aunt Mary setting fire to the Christmas pudding <img src="mary.jpg"></li>
>>>> </ol>
>>>> 
>>>> Case 2
>>>> <ol>
>>>> <li><img src="fred.jpg" alt="Uncle Fred"></li>
>>>> <li><img src="mary.jpg" alt="Aunt Mary"></li>
>>>> </ol>
>>>> 
>>>> Now case 2 employs the "alt" attribute, so it meets a success technique (even though it is less informative than case 1).
>>>> 
>>>> If Example 1 were links (using the <a> element) it would also pass muster (H2: Combining adjacent image and text links), but it is not a link and there is no documentation (that I know of) within WCAG about this specific situation (within the <li> element).
>>>> 
>>>> NOW here is the rub. – Failure F65 says that both my example 2 and H2 are failures because neither uses the alt attribute!!!! So if I rely on Failure Techniques I would fail both my example 2 and anything using H2.
>>>> 
>>>> HOWEVER – using testing procedures I can check that example 1 passes because it has (reasonably) meaningful alt attributes; whilst example 2 passes because it makes perfect sense when read out by my screen reader, my blind testers confirm it is good, and it still makes sense if the image fails to display. The only thing about example 2 is that Google will not catalogue the image (which might be a good thing!)
>>>> 
>>>> 
>>>> So I return to my original thought that step 1e should be about procedures, not techniques.
>>>> 
>>>> Best wishes
>>>> Richard
>>>> 
>>>> 
>>>> 
>>> 
>>> -- 
>>> Shadi Abou-Zahra - http://www.w3.org/People/shadi/
>>> Activity Lead, W3C/WAI International Program Office
>>> Evaluation and Repair Tools Working Group (ERT WG)
>>> Research and Development Working Group (RDWG) 
>>> 
>>> 
>> 
> 

Received on Wednesday, 13 June 2012 15:10:32 UTC