- From: Alistair Garrison <alistair.j.garrison@gmail.com>
- Date: Wed, 13 Jun 2012 16:07:01 +0200
- To: RichardWarren <richard.warren@userite.com>, Eval TF <public-wai-evaltf@w3.org>
Dear All,

"an evaluator needs a procedure which is capable of recognising and analysing the use (or not) of those techniques (added: and failure conditions) whilst still being aware that there could be alternative solutions"...

Might such a procedure be:

1) ask the web developer what techniques they used;
2) determine whether these techniques broadly fulfil the relevant Success Criteria;
3) if they do, evaluate whether the selected techniques have been properly implemented, and evaluate all relevant failure techniques; if they don't, suggest further techniques, but still evaluate whether the selected techniques have been properly implemented, and evaluate all relevant failure techniques.

You would of course need to ask for the techniques, in order to make such a procedure reproducible.

All the best

Alistair

On 13 Jun 2012, at 15:35, RichardWarren wrote:

> Hi Shadi,
>
> Thank you - I believe that your argument reinforces my point that we should concentrate on procedures for checking compliance, not solely the existence (or not) of certain techniques. Yes, F65 says that no alt = failure, but H2 says that no alt is acceptable if the image is a link that also contains text within the anchor element.
>
> I do not think it is our task to refine WCAG techniques etc., but rather to check for compliance with the actual GUIDELINES in practice and intent, to ensure that the web content is accessible to all users. We thus need a procedure that checks first for the obvious (in this case, has the developer used the technique of including an alt attribute, and is it suitable?). Only then, if the obvious technique has not been used, do we need to include a check to see if the image is included in an anchor (or other similar resource) with adjacent text within that resource (H2), or indeed any other technique that ensures AT users can understand what the image is for/about.
>
> I am afraid that evaluation cannot be properly done by simply failing an issue because a certain "General Failure" applies. I still believe that Success and Failure Techniques are primarily aimed at the web developer, whereas an evaluator needs a procedure which is capable of recognising and analysing the use (or not) of those techniques whilst still being aware that there could be alternative solutions.
>
> If we stick stubbornly to the published techniques, and only the published techniques, we are in danger of stifling the development of the web.
>
> Regards
>
> Richard
>
>
> -----Original Message-----
> From: Shadi Abou-Zahra
> Sent: Wednesday, June 13, 2012 1:20 PM
> To: Richard Warren
> Cc: Eval TF
> Subject: Re: Success, Failure techniques - side issue for discussion
>
> Hi Richard,
>
> Looking at "General Failure F65" as per your example:
>
> Case 1 correctly fails because there is no alt attribute and a screen reader would in most cases start reading the filename. Your example would work if you used null alt text, as "General Failure F65" advises in the section "Related Techniques".
>
> Case 2 uses the alt attribute, so it does not fail "General Failure F65" (but we can't say much more about its conformance from F65 alone).
>
> Now this is exactly the point: by looking only at the section called "Tests" we miss out on important context and explanations, such as the important reference to "Technique H67" in this example.
>
> WCAG 2.0 Techniques and Failures (as Detlev correctly points out the terminology should be) are far from complete or perfect. We can talk about how to improve them, both in how they are written and in how they are presented to evaluators. We can also explain the concept in our document more clearly. I think this would get more to the core of the problem than trying to re-label the sections as they are.
>
> Regards,
> Shadi
>
>
> On 13.6.2012 13:04, RichardWarren wrote:
>> Sorry but I got my cases mixed up. The last paragraphs should have read:
>>
>> NOW here is the rub. – Failure F65 says that both my case 1 and H2 are failures because neither uses the alt attribute !!!! So if I rely on Failure Techniques I would fail both my case 1 and anything using H2.
>>
>> HOWEVER – using testing procedures I can check that case 2 passes because it has (reasonably) meaningful alt attributes; whilst case 1 passes because it makes perfect sense when read out by my screen reader, my blind testers confirm it is good, and it still makes sense if the image fails to display. The only thing about case 1 is that Google will not catalogue the image (which might be a good thing!).
>>
>> Sorry about that – poor proofreading on my part.
>> Richard
>>
>> From: RichardWarren
>> Sent: Wednesday, June 13, 2012 11:21 AM
>> To: Eval TF
>> Subject: Success, Failure techniques - side issue for discussion
>>
>> Hi.
>> I would like to drop in a (very rough) example to explain why I am concerned that we are getting hung up on the techniques used by the developers rather than the procedures used by the evaluator.
>>
>> Case 1
>> <ol>
>>   <li>Here is a picture of Uncle Fred wearing his bright Christmas Jumper <img src="fred.jpg"></li>
>>   <li>Here is a picture of Aunt Mary setting fire to the Christmas pudding <img src="mary.jpg"></li>
>> </ol>
>>
>> Case 2
>> <ol>
>>   <li><img src="fred.jpg" alt="Uncle Fred"></li>
>>   <li><img src="mary.jpg" alt="Aunt Mary"></li>
>> </ol>
>>
>> Now case 2 employs the "alt" attribute, so it meets a success technique (even though it is less informative than case 1).
>>
>> If Example 1 were links (using the <a> element) it would also pass muster (H2: Combining adjacent image and text links), but it is not a link and there is no documentation (that I know of) within WCAG about this specific situation (within the <li> element).
>>
>> NOW here is the rub. – Failure F65 says that both my example 2 and H2 are failures because neither uses the alt attribute !!!! So if I rely on Failure Techniques I would fail both my example 2 and anything using H2.
>>
>> HOWEVER – using testing procedures I can check that example 1 passes because it has (reasonably) meaningful alt attributes; whilst example 2 passes because it makes perfect sense when read out by my screen reader, my blind testers confirm it is good, and it still makes sense if the image fails to display. The only thing about example 2 is that Google will not catalogue the image (which might be a good thing!).
>>
>> So I return to my original thought that step 1e should be about procedures, not techniques.
>>
>> Best wishes
>> Richard
>
> --
> Shadi Abou-Zahra - http://www.w3.org/People/shadi/
> Activity Lead, W3C/WAI International Program Office
> Evaluation and Repair Tools Working Group (ERT WG)
> Research and Development Working Group (RDWG)
Received on Wednesday, 13 June 2012 14:07:42 UTC
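
A minimal markup sketch of the three situations referred to in this thread (Failure F65, Technique H67, and Technique H2), reusing fred.jpg from Richard's example; the link target fred.html is only an assumed placeholder:

  <!-- Fails F65: an informative image with no alt attribute at all;
       most screen readers fall back to announcing the filename. -->
  <img src="fred.jpg">

  <!-- Satisfies H67: null alt text (and no title attribute) tells
       assistive technology to ignore the image; appropriate only when
       the image adds no information of its own. -->
  <img src="fred.jpg" alt="">

  <!-- Satisfies H2: adjacent image and text combined into a single link;
       the image carries empty alt because the link text already
       describes the destination. -->
  <a href="fred.html"><img src="fred.jpg" alt="">Photo of Uncle Fred</a>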