W3C home > Mailing lists > Public > public-wai-evaltf@w3.org > June 2012

RE: Success, Failure techniques - side issue for discussion

From: Vivienne CONWAY <v.conway@ecu.edu.au>
Date: Thu, 14 Jun 2012 10:14:06 +0800
To: RichardWarren <richard.warren@userite.com>, Alistair Garrison <alistair.j.garrison@gmail.com>, Eval TF <public-wai-evaltf@w3.org>
Message-ID: <8AFA77741B11DB47B24131F1E38227A9C29DAD5E54@XCHG-MS1.ads.ecu.edu.au>
Hi all

From my perspective, asking the developer for techniques will seldom work unless you:
- have expressly been given access to the developers to ask the questions
- have sufficient time to wait until they get around to answering
- are doing the evaluation with their knowledge and cooperation

In my case (for my PhD research) I have advised the website owners that their website is part of the study and is being evaluated on a regular basis.  However, apart from answering yearly surveys, they don't have any contact with me - unless they purposely contact me to let me know of upcoming changes, such as a site re-development.  I am assessing their public-domain websites and can only determine for myself, as far as possible, what techniques they have used.

As I'm working against a list of WCAG checkpoints for A and AA, I use a variety of means, including both tools and manual evaluation, to test the checkpoints: I look at the results from those tools, use AT to see what impact an issue might have on the user, and also consult a user testing team for their input.  So whatever set of techniques the developers used is less important than whether or not each checkpoint passes WCAG 2.0 at A and AA.

For my commercial work, it is entirely different.  I have access to the website owner, but not necessarily to the developer unless the owner grants that right.  Most often the website owner will go back to the developer with questions from me when I require further input.  So, I end up in much the same situation as above.

Your thoughts are always welcome.


Regards

Vivienne L. Conway, B.IT(Hons), MACS CT, AALIA(cs)
PhD Candidate & Sessional Lecturer, Edith Cowan University, Perth, W.A.
Director, Web Key IT Pty Ltd.
v.conway@ecu.edu.au
v.conway@webkeyit.com
Mob: 0415 383 673

This email is confidential and intended only for the use of the individual or entity named above. If you are not the intended recipient, you are notified that any dissemination, distribution or copying of this email is strictly prohibited. If you have received this email in error, please notify me immediately by return email or telephone and destroy the original message.
________________________________________
From: RichardWarren [richard.warren@userite.com]
Sent: Wednesday, 13 June 2012 11:04 PM
To: Alistair Garrison; Eval TF
Subject: Re: Success, Failure techniques - side issue for discussion

Hi Alistair,

Well, your suggestion is a procedure, so we are on the right track. However,
asking developers for the techniques they used is not (in my
experience) practical in the real world. We recently had to do a series of
eleven ministry sites in just four days - it would take four weeks (at
least) to get any response from the full eleven teams! The primary
procedure must be stand-alone, independent, efficient, verifiable,
repeatable and as simple as possible.

Sorry
Richard

-----Original Message-----
From: Alistair Garrison
Sent: Wednesday, June 13, 2012 3:07 PM
To: RichardWarren ; Eval TF
Subject: Re: Success, Failure techniques - side issue for discussion

Dear All,

"an evaluator needs a procedure which is capable of recognising and
analysing the use (or not) of those techniques (added: and failure
conditions) whilst still being aware that there could be alternative
solutions"…

Might such a procedure be:
1) ask the web developer what techniques they used;
2) determine if these techniques broadly fulfil the relevant Success
Criteria;
3) if they do: evaluate if their selected techniques have been properly
implemented, and evaluate all relevant failure techniques; and
  if they don't: suggest further techniques, but still evaluate if their
selected techniques have been properly implemented, and evaluate all
relevant failure techniques.

You would of course need to ask for the techniques - in order to make such a
procedure reproducible.
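For what it is worth, the three steps above can be sketched in code. This is only an illustrative sketch, not part of any WCAG tooling: the function and predicate names are hypothetical, and the predicates stand in for the evaluator's human judgement at steps 2 and 3.

```python
# Hypothetical sketch of the three-step procedure above. The predicates
# (fulfils_sc, properly_implemented, triggers_failure) represent the
# evaluator's own judgement calls; nothing here comes from WCAG itself.

def evaluate_sc(declared_techniques, relevant_failures,
                fulfils_sc, properly_implemented, triggers_failure):
    """Return a report for one Success Criterion.

    declared_techniques  -- techniques the developer says they used (step 1)
    relevant_failures    -- WCAG Failures relevant to this SC
    fulfils_sc           -- does a technique broadly fulfil the SC? (step 2)
    properly_implemented -- is the technique correctly applied? (step 3)
    triggers_failure     -- does the content match a Failure? (step 3)
    """
    report = {"suggest_further_techniques": False,
              "badly_implemented": [],
              "failures_found": []}

    # Step 2: do the declared techniques broadly fulfil the SC?
    if not any(fulfils_sc(t) for t in declared_techniques):
        # Step 3, "if they don't": suggest further techniques...
        report["suggest_further_techniques"] = True

    # Step 3 (both branches): ...but still evaluate whether the selected
    # techniques are properly implemented, and evaluate all relevant
    # failure conditions.
    report["badly_implemented"] = [t for t in declared_techniques
                                   if not properly_implemented(t)]
    report["failures_found"] = [f for f in relevant_failures
                                if triggers_failure(f)]
    return report
```

Note that both branches of step 3 run the same checks; the only difference the developer's answer makes is whether further techniques are suggested.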

All the best

Alistair

On 13 Jun 2012, at 15:35, RichardWarren wrote:

> Hi Shadi,
>
> Thank you - I believe that your argument reinforces my point that we
> should concentrate on procedures for checking compliance, not solely on the
> existence (or not) of certain techniques. Yes, F65 says that no alt =
> failure, but H2 says that no alt is acceptable if the image is a link that
> also contains text within the anchor element.
>
> I do not think it is our task to refine WCAG techniques etc., but rather
> to check for compliance with the actual GUIDELINES, in practice and in
> intent, to ensure that the web content is accessible to all users. We thus
> need a procedure that checks first for the obvious (in this case: has the
> developer used the technique of including an alt attribute, and is it
> suitable?). Only then, if the obvious technique has not been used, do we
> need to check whether the image is included in an anchor (or
> other similar resource) with adjacent text within that resource (H2), or
> indeed whether any other technique ensures AT users can understand what the
> image is for/about.
>
> I am afraid that evaluation cannot be properly done by simply failing an
> issue because a certain "General Failure" applies. I still believe that
> Success and Failure Techniques are primarily aimed at the web developer,
> whereas an evaluator needs a procedure which is capable of recognising and
> analysing the use (or not) of those techniques whilst still being aware
> that there could be alternative solutions.
>
> If we stick stubbornly to the published techniques, and only the published
> techniques, we are in danger of stifling the development of the web.
>
> Regards
>
> Richard
>
>
>
> -----Original Message----- From: Shadi Abou-Zahra
> Sent: Wednesday, June 13, 2012 1:20 PM
> To: Richard Warren
> Cc: Eval TF
> Subject: Re: Success, Failure techniques - side issue for discussion
>
> Hi Richard,
>
> Looking at "General Failure F65" as per your example:
>
> Case 1 correctly fails because there is no alt attribute, and a screen
> reader would in most cases start reading the filename. Your example
> would work if you used null alt-text, as "General Failure F65" advises
> in its "Related Techniques" section.
>
> Case 2 uses the alt attribute so it does not fail "General Failure F65"
> (but we can't say much more about its conformance just from F65 alone).
>
> Now this is exactly the point: by looking only at the section called
> "Tests" we miss out important context and explanations, such as the
> important reference to "Technique H67" in this example.
>
> WCAG 2.0 Techniques and Failures (as Detlev correctly points out the
> terminology should be) are far from complete or perfect. We can talk
> about how to improve them both from how they are written and to how they
> are presented to evaluators. We can also explain the concept in our
> document more clearly. I think this would get more to the core of the
> problem than trying to re-label the sections as they are.
>
> Regards,
>  Shadi
>
>
> On 13.6.2012 13:04, RichardWarren wrote:
>> Sorry but I got my cases mixed up.
>> The last paragraphs should have read
>>
>> NOW here is the rub. – Failure F65 says that both my case 1 and H2 are
>> failures because neither uses the alt attribute!!!! So if I rely on
>> Failure Techniques I would fail both my case 1 and anything using H2.
>>
>> HOWEVER – using testing procedures I can check that case 2 passes because
>> it has (reasonably) meaningful alt attributes; whilst case 1 passes
>> because it makes perfect sense when read out by my screen reader, my
>> blind testers confirm it is good, and it still makes sense if the image
>> fails to display. The only thing about case 1 is that Google will not
>> catalogue the image (which might be a good thing!)
>>
>> Sorry about that – poor proof reading on my part
>> Richard
>>
>> From: RichardWarren
>> Sent: Wednesday, June 13, 2012 11:21 AM
>> To: Eval TF
>> Subject: Success, Failure techniques - side issue for discussion
>>
>> Hi.
>> I would like to drop in a (very rough) example to explain why I am
>> concerned that we are getting hung up on the techniques used by the
>> developers rather than the procedures used by the evaluator.
>>
>> Case 1
>> <ol>
>> <li>Here is a picture of Uncle Fred wearing his bright Christmas
>> Jumper <img src="fred.jpg"></li>
>> <li>Here is a picture of Aunt Mary setting fire to the Christmas
>> pudding <img src="mary.jpg"></li>
>> </ol>
>>
>> Case 2
>> <ol>
>> <li><img src="fred.jpg" alt="Uncle Fred"></li>
>> <li><img src="mary.jpg" alt="Aunt Mary"></li>
>> </ol>
>>
>> Now case 2 employs the "alt" attribute, so it meets a success technique
>> (even though it is less informative than case 1).
>>
>> If Example 1 were links (using the <a> element) it would also pass
>> muster (H2: Combining adjacent image and text links), but it is not a link
>> and there is no documentation (that I know of) within WCAG about this
>> specific situation (within the <li> element).
>>
>> NOW here is the rub. – Failure F65 says that both my example 2 and H2 are
>> failures because neither use the alt attribute !!!!   So if I rely on
>> Failure Techniques I would fail both my example 2 and anything using H2.
>>
>> HOWEVER – using testing procedures I can check that example 1 passes
>> because it has (reasonably) meaningful alt attributes;  whilst example 2
>> passes because it makes perfect sense when read out by my screen reader,
>> my blind testers confirm it is good, it still makes sense if the image
>> fails to display. The only thing about example 2 is that Google will not
>> catalogue the image (which might be a good thing !)
>>
>>
>> So I return to my original thought that step 1e should be about
>> procedures not techniques.
>>
>> Best wishes
>> Richard
>>
>>
>>
>
> --
> Shadi Abou-Zahra - http://www.w3.org/People/shadi/
> Activity Lead, W3C/WAI International Program Office
> Evaluation and Repair Tools Working Group (ERT WG)
> Research and Development Working Group (RDWG)
>
>

Received on Thursday, 14 June 2012 02:14:47 GMT

This archive was generated by hypermail 2.3.1 : Friday, 8 March 2013 15:52:14 GMT