Re: Step 1.e: Define the Techniques to be used

Hi Alistair,

If a commissioner says, "I have used this new technique for, say,
skipping blocks, or displaying lightboxes", it certainly makes sense
to report back on the success of that particular technique.

However, I see several problems with making this step mandatory:

* In some cases, evaluators will have no access to the authors of the
site under test.

* Where do you stop? There are hundreds of techniques. Which ones  
should be defined?

* Many (most) implementations are similar to the bare-bones WCAG
techniques, but rarely exactly the same. Reliably mapping these
adapted techniques to the WCAG Techniques will be tricky.

* Advanced script-based techniques are very difficult to check. We can
look at the page and check whether, say, a dynamically inserted
element receives keyboard focus, or is hidden automatically once
keyboard focus leaves it (the kind of behaviour sketched below). But
do we really need to dive into the script to see how this has been
implemented? (Maybe this is not what you meant.)
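
For what it's worth, here is a minimal sketch of the behaviour I mean
- my own illustration in TypeScript-flavoured DOM script, not taken
from any WCAG Technique document; openLightbox and its parameters are
names I made up for the example:

  function openLightbox(trigger: HTMLElement, content: string): void {
    const box = document.createElement("div");
    box.setAttribute("role", "dialog");
    box.tabIndex = -1;              // focusable via script only
    box.textContent = content;

    // Hide the box as soon as keyboard focus moves outside it.
    box.addEventListener("focusout", (event: FocusEvent) => {
      const next = event.relatedTarget as Node | null;
      if (!next || !box.contains(next)) {
        box.remove();
        trigger.focus();            // hand focus back to the trigger
      }
    });

    document.body.appendChild(box);
    box.focus();                    // move keyboard focus into the box
  }

The point is that an evaluator can verify both behaviours from the
keyboard alone (focus lands in the box; the box disappears once focus
leaves it) without ever opening the script.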

I think it may be useful to tick off techniques when it is obvious
that they have been used (successfully or unsuccessfully), and
especially to tick off failures when they clearly apply (because a
failure proves that an SC has not been met in all cases, without the
disclaimer that some other technique might have been used). HOWEVER,
identifying ALL techniques used during an evaluation seems a high
burden. I can't quite see the benefit.

Regards,
Detlev

On 10 May 2012, at 10:48, Alistair Garrison wrote:

> Dear All,
>
> "Step 1.e: Define the Techniques to be used" - could we consider  
> making this step non-optional?
>
> The first reason is that we really need to check their
> implementation of the techniques (W3C, their own code of best
> practice, or whatever) they say they use.
>
> For example:
>
> - Case 1) If they have done something using technique A, and we
> evaluate using technique B, there could be an issue (they might fail
> B);
> - Case 2) If they have done something using technique A, and we
> evaluate using techniques A and B, there could still be an issue
> (they might fail B);
> - Case 3) If they have done something using technique A, and we
> evaluate using technique A - it seems to work.
>
> The second reason is that testing seems only to be truly replicable
> if we know which techniques they said they implemented - otherwise,
> two different teams could easily get two different results, based on
> the cases above.
>
> I would be interested to hear your thoughts.
>
> Very best regards
>
> Alistair
>

-- 
Detlev Fischer
testkreis - the accessibility team of feld.wald.wiese
c/o feld.wald.wiese
Borselstraße 3-7 (im Hof)
22765 Hamburg

Tel   +49 (0)40 439 10 68-3
Mobil +49 (0)1577 170 73 84
Fax   +49 (0)40 439 10 68-5

http://www.testkreis.de
Consulting, testing, and training for accessible websites
