Re: Step 1.e: Define the Techniques as used

Hi Alistair, All,

I think we may have a mismatch of terminology and basic assumptions. 
Here is my take on the thread so far (apologies for the lengthy mail):

On 10.5.2012 10:48, Alistair Garrison wrote [1]:
> - Case 1) If they have done something by using technique A, and we evaluate using technique B there could be an issue (they might fail B);
> - Case 2) If they have done something by using technique A, and we evaluate using technique A and B there still could be an issue (they might fail B);
> - Case 3) If they have done something by using technique A, and we evaluate using technique A - it seems to work.
[1] <http://lists.w3.org/Archives/Public/public-wai-evaltf/2012May/0008>

Unless "technique B" is a failure technique, all that an evaluator can 
say when B is not met is that the conformance to *success criterion X* 
can't be confirmed. This does not mean the content does not conform to 
the success criterion, it just means the evaluator could not verify it.

With failure techniques, evaluators can immediately determine failures 
to conform. It seems Kerstin tried to explain that aspect too:
  - <http://lists.w3.org/Archives/Public/public-wai-evaltf/2012May/0063>
  - <http://lists.w3.org/Archives/Public/public-wai-evaltf/2012May/0065>

Issue: we (the community) need to document many more failure techniques.
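
To put the distinction in more mechanical terms, here is a minimal 
sketch of how technique outcomes map to a verdict for a single success 
criterion (a Python illustration of my own; the function name and the 
"conforms" / "fails" / "cannot confirm" labels are mine, not WCAG 2.0 
terminology):

  def verdict(sufficient_met, failure_found):
      # sufficient_met: at least one sufficient technique was met
      # failure_found:  a documented failure technique applies
      if failure_found:
          return "fails"          # a failure shows non-conformance
      if sufficient_met:
          return "conforms"       # a sufficient technique confirms the SC
      return "cannot confirm"     # unverified, which is not "fails"

  # Alistair's cases 1 and 2: technique B is not met, no failure found
  print(verdict(sufficient_met=False, failure_found=False))
  # prints "cannot confirm", not "fails"

The point is that "not met" only collapses into "fails" when a failure 
technique applies; otherwise the honest verdict is the third one.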


On 10.5.2012 13:51, Alistair Garrison wrote [2]:
> Evaluation Commissioner - we have developed the website in accordance with our techniques which we believe deliver accessible content.
> Evaluator - well that doesn't really matter, I'm going to test your website against my checks based on my interpretation of WCAG 2.0...
[2] <http://lists.w3.org/Archives/Public/public-wai-evaltf/2012May/0017>

Note that the evaluation commissioner role is quite different from the 
developer role. The evaluation commissioner may not know anything about 
the development process or the techniques used; for example, a ministry 
or similar body commissioning an evaluation of a set of existing 
websites.

So, the dialog sequence would more likely be:

Developer: website X conforms to WCAG 2.0
Evaluation Commissioner: evaluate website X [using techniques from Y]
Evaluator:
  - response 1: confirm X conforms to WCAG 2.0 [using Y techniques]
  - response 2: found no conformance failures on X [using Y techniques]
  - response 3: found conformance failures on X [using Y techniques]

Now, in most cases it is probably in the interest of the commissioner 
to define the set of techniques, especially if they also happen to have 
commissioned the development of the website using a specific set. 
However, I'm not sure why this needs to be mandatory in all situations.

Note that response 2 is often treated as equivalent to response 1, 
since evaluators typically don't like to say "I actually don't really 
know".

Issue: the relationship between success criteria and techniques needs 
to be further clarified (probably by referencing Understanding WCAG 2.0).


On 10.5.2012 15:47, Richard Warren wrote [3]:
> Thus to check correct use of headings (1.3.1, 2.4.1, 2.4.6  and 4.1.2) I use my toolbar to list the headings on the page, and check that this list gives me an accurate overview of the page. That is my technique. If the developer disagrees with my result he can follow it through and see the results for himself. Now any further discussion between us is based upon an agreed system so all we have to do is compare our understandings of the text used in these headings. Also future evaluators can replicate my technique and check if I have done my job properly.
[3] <http://lists.w3.org/Archives/Public/public-wai-evaltf/2012May/0019>

This is a typical consulting scenario, where you are in direct exchange 
with the developer. In most cases, however, it is probably far more 
powerful to point to publicly documented techniques that back up your 
assertions. Ideally, these would be techniques recognized as 
"authoritative" within the context of the evaluation (or community).

Issue: we have a continual need for more publicly documented techniques.


Conclusion: I personally think the underlying issue is the lack of 
publicly documented techniques (including failure techniques), 
especially for languages other than English. I also think we need to 
better explain how to use techniques and how they relate to 
conformance. However, none of these are convincing arguments for making 
Step 1.e mandatory, in my view.


Regards,
   Shadi

-- 
Shadi Abou-Zahra - http://www.w3.org/People/shadi/
Activity Lead, W3C/WAI International Program Office
Evaluation and Repair Tools Working Group (ERT WG)
Research and Development Working Group (RDWG)
