Re: Step 1.e: Define the Techniques to be used

Hi Alistair,

First, let me answer your reporting question: of course I would put a
note in the evaluation report for SC 2.4.1 pointing out that skip
links were implemented but don't work (or don't become visible to
keyboard users when receiving focus, or whatever the issue is). So I
think we probably have no differences at all regarding reporting for
repair/correction purposes.

My point was just that, following the "How to Meet" document, you would
need to check *all* listed options until you find one that meets the
SC, and even, as Kerstin points out, check beyond that for novel
techniques not yet documented in WCAG that might have been used to
meet the SC. You would also need to check whether any failure attached
to that SC applies (which is often, but not always, done implicitly,
since many failures are the reverse of techniques).
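
As a toy illustration of that "check all options" logic for SC 2.4.1, a script could collect evidence for several sufficient techniques at once, rather than stopping at the one the client names. This is only a sketch using Python's standard html.parser; the function names and the crude pass criteria are my own simplifications, not anything defined in WCAG or WCAG-EM:

```python
from html.parser import HTMLParser

class BypassBlocksChecker(HTMLParser):
    """Collects evidence for several sufficient techniques for SC 2.4.1."""
    def __init__(self):
        super().__init__()
        self.first_link_href = None  # href of the first link on the page
        self.ids = set()             # all element ids (possible skip targets)
        self.headings = 0            # count of h1-h6 elements
        self.landmarks = 0           # count of landmark elements/roles

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "id" in attrs:
            self.ids.add(attrs["id"])
        if tag == "a" and self.first_link_href is None:
            self.first_link_href = attrs.get("href", "")
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self.headings += 1
        if tag in ("main", "nav") or attrs.get("role") in ("main", "navigation"):
            self.landmarks += 1

def bypass_blocks_evidence(html):
    """Return a crude pass/fail indication per sufficient-technique option."""
    p = BypassBlocksChecker()
    p.feed(html)
    href = p.first_link_href or ""
    # Skip link "works" only if it is in-page and its target id exists
    skip_link_ok = href.startswith("#") and href[1:] in p.ids
    return {
        "skip_link": skip_link_ok,   # e.g. a skip-to-content link
        "headings": p.headings > 0,  # e.g. headings at section starts
        "landmarks": p.landmarks > 0,
    }
```

A page with a broken skip link but a sound heading structure would then still show one passing option, which is exactly the situation in the skip link example further down the thread.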

All that does not negate the usefulness of getting a list of
techniques from the commissioner/client and checking whether they do
the trick. This will be especially useful for more advanced
scripting-based work. For assessing conformance, however, the result
of checking only those client-supplied techniques will probably not be
conclusive.

However, I would not go so far as to say that, because of the
disclaimer at the bottom of every test in a WCAG Technique and the
non-normative nature of these techniques, they must not be referenced
at all. The explicit purpose of documenting the techniques was to make
WCAG 2.0 testable (hence the tests at the end of each technique).

For the practical results of the evaluation, it may be secondary
whether an identified problem clearly maps onto a particular WCAG
Technique; in any case, this mapping is often difficult since most
techniques will be used with modifications.

The tools at hand will often allow us to check at a level *above*
individual techniques (and sometimes *below* the SC, as in 1.3.1). Take
SC 1.1.1: you use whatever tool you choose to compare images to their
alt texts and evaluate, for all images and objects, whether the alt
texts are appropriate, either on their own or in the context of the
surrounding code. Instances can then, probably with some difficulty,
be mapped onto one of the more than a dozen techniques listed to meet
SC 1.1.1, if this is an important requirement in WCAG-EM and really
worth our time (= worth clients' money).
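
To make that concrete: the tool-level check for SC 1.1.1 could be as simple as dumping every image together with its alt attribute for human review, with the mapping to individual techniques left as a separate, later step. A minimal sketch with Python's standard html.parser (the names are mine, not from any actual WCAG tool):

```python
from html.parser import HTMLParser

class AltTextCollector(HTMLParser):
    """Lists every img with its alt attribute (or None if missing)
    so a human reviewer can judge appropriateness in context."""
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # (src, alt) pairs; alt is None when the attribute is absent
            self.images.append((attrs.get("src", ""), attrs.get("alt")))

def collect_alt_texts(html):
    collector = AltTextCollector()
    collector.feed(html)
    return collector.images
```

The reviewer then judges each entry in context; a missing alt attribute (None) and an empty alt ("") are deliberately kept distinct, since an empty alt may be correct for decorative images.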

In all evaluations that are not simply aimed at assessing (and
hopefully stating) conformance of a finished and honed end result, the
developer just needs a clear understanding of the issues he/she needs
to tackle to remedy the problems identified. For that, he/she will
rarely refer to the documented WCAG Techniques, so listing failed
WCAG Techniques (or failures met) is probably more something done
'for the record'.

In my opinion, it is not feasible to list (and work through) *all* the
(dozens or hundreds of) techniques employed in a site; many of them
are bread-and-butter stuff that would just be tedious to list and tick
off. So my hunch is that client-provided techniques will focus on
adaptations of the trickier, scripting-based stuff. Example: "We have
implemented lightboxes based on / similar to JS Framework so-and-so,
but we have made the following adaptations...."

Best,
Detlev


On 31 May 2012, at 19:45, Alistair Garrison wrote:

> Hi Detlev,
>
> This needs to be debated in light of the other emails in the  
> previous thread.
>
> Answering your points, however, if their intention was to implement  
> a skip link to meet 2.4.1 - and the skip link was defective - would  
> you prefer:
>
> a) to understand their intention was to provide a skip link, to tell  
> them that they failed 2.4.1, and then how to correct the skip link  
> so it works; or
> b) to tell them that they passed 2.4.1 - saying that they passed by  
> some totally unintended means, possibly leaving the defective skip  
> link unreported (although I personally wouldn't say a skip link can  
> be replaced by proper structure - both are useful)…
>
> I know which I would want to hear if I commissioned the report.
>
> It does, however, show that there are most certainly many areas of  
> overlap within the sufficient techniques.
>
> All the best
>
> Alistair
>
> On 31 May 2012, at 17:22, detlev.fischer@testkreis.de wrote:
>
>> Hi Alistair, hi all,
>>
>> Don't know if it is a good idea to answer here since this now goes  
>> into the "Disposition of Comments" but I'll have a go nevertheless.
>>
>> As I understand it, for each SC we need to look at whether any of
>> the Sufficient Techniques (or a set of combined techniques as
>> expressed in the options of the "How to Meet" document) has been
>> successfully used. For that, it is not sufficient to test only the
>> techniques put forward by the commissioner.
>>
>> Example:
>> * Commissioner says "we have implemented skip links to meet 2.4.1  
>> Bypass Blocks"
>> * You evaluate and find that for some reason skip links aren't  
>> properly implemented (fail of that technique)
>> * There is a proper headings structure that meets SC 2.4.1 (or ARIA
>> landmarks in a context where that is accessibility supported)
>>
>> Now as long as you don't hit a failure, I guess you would need to
>> say the SC passes even though the technique submitted did not work.
>> (Having said that, the faulty skip links may fail other SC, but not  
>> SC 2.4.1).
>>
>> Any thoughts?
>>
>> Regards,
>> Detlev
>>
>> ----- Original Message -----
>> From: alistair.j.garrison@gmail.com
>> To: public-wai-evaltf@w3.org
>> Date: 31.05.2012 17:06:52
>> Subject: Fwd: Step 1.e: Define the Techniques to be used
>>
>>
>>> Dear All,
>>>
>>> Would it be possible to add my comments about Step 1.e to the  
>>> comments document - http://www.w3.org/WAI/ER/conformance/comments
>>>
>>> Begin forwarded message:
>>>
>>>> From: Alistair Garrison <alistair.j.garrison@gmail.com>
>>>> Subject: Step 1.e: Define the Techniques to be used
>>>> Date: 10 May 2012 10:48:41 CEST
>>>> To: Eval TF <public-wai-evaltf@w3.org>
>>>>
>>>> Dear All,
>>>>
>>>> "Step 1.e: Define the Techniques to be used" - could we consider  
>>>> making this step non-optional?
>>>>
>>>> The first reason is that we really need to check their
>>>> implementation of the techniques they say they use (W3C's, their
>>>> own code of best practice, or whatever).
>>>>
>>>> For example:
>>>>
>>>> - Case 1) If they have done something by using technique A, and  
>>>> we evaluate using technique B there could be an issue (they might  
>>>> fail B);
>>>> - Case 2) If they have done something by using technique A, and  
>>>> we evaluate using technique A and B there still could be an issue  
>>>> (they might fail B);
>>>> - Case 3) If they have done something by using technique A, and  
>>>> we evaluate using technique A - it seems to work.
>>>>
>>>> The second reason is that testing only seems to be truly
>>>> replicable if we know which techniques they said they
>>>> implemented; otherwise, two different teams could easily get two
>>>> different results, based on the cases above.
>>>>
>>>> I would be interested to hear your thoughts.
>>>>
>>>> Very best regards
>>>>
>>>> Alistair
>>>>
>>>
>>

-- 
Detlev Fischer
testkreis - das Accessibility-Team von feld.wald.wiese
c/o feld.wald.wiese
Borselstraße 3-7 (im Hof)
22765 Hamburg

Tel   +49 (0)40 439 10 68-3
Mobil +49 (0)1577 170 73 84
Fax   +49 (0)40 439 10 68-5

http://www.testkreis.de
Beratung, Tests und Schulungen für barrierefreie Websites

Received on Friday, 1 June 2012 06:19:13 UTC