Re: AW: Step 1.e: Define the Techniques to be used

Hi Alistair, Kerstin, and the rest of the list,

I recognise the problem that any checkpoint assembling a particular set of SC-related techniques may miss some new, as yet undocumented technique that also meets the SC. What we try to do is cover those techniques that are commonly used (quite a few in the case of 1.1.1). We have mapped all but a very few obscure WCAG 2.0 Techniques and Failures onto our checkpoints (compare http://www.bitvtest.eu/articles/article/lesen/mapping-complete.html ).

In a sense, the actual testing of many sites helps you improve checkpoint procedures over time: you amend and extend checkpoints to reflect cases that weren't covered, or not covered properly. Alistair, I doubt that your site would fare badly in our test if you use WCAG techniques (we can try it if you like). Of course, keeping such a tool up to date is a constant effort, but either you do that or you are left with just WCAG and the Quickref referencing hundreds of atomic techniques, which is not the sort of environment that is conducive to an efficient workflow. You may prescribe that, but would you be surprised if people found it rather daunting and couldn't be bothered to go through it?

Nevertheless, you always have to be mindful of other potential techniques. The benchmark, to be clear, is always the SC, not any particular technique. I still feel that documenting checks in techniques (or aggregating checks based on documented techniques) is a great help for evaluators and, as a side effect, also acts as an educational resource for everyone becoming aware of a11y issues and wanting pointers for producing more accessible content.
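To make the idea of a consolidated checkpoint concrete, here is a hypothetical sketch (not BITV-Test's actual code) of how a checkpoint-level check for SC 1.1.1 might roll several technique-level checks on linked images into one pass. The function name, the data shape, and the placeholder list are all invented for illustration; a real check inspects the DOM, and the final adequacy judgement stays with the human evaluator.

```javascript
// Hypothetical checkpoint-style check: given the linked images of a page
// (each an anchor wrapping an img), flag clear failures and mark the rest
// for human review. Heuristics only - "adequate alternative text" cannot
// be decided mechanically.

// Alt texts that clearly do not describe a link target (illustrative list).
const PLACEHOLDERS = new Set(["", "image", "grafik", "spacer", "logo"]);

function checkLinkedImageAlt(linkedImages) {
  return linkedImages.map(({ href, alt }) => {
    let verdict;
    if (alt === null || alt === undefined) {
      // Missing alt attribute on an img inside a link: a documented
      // WCAG 2.0 failure condition (compare F65), no human review needed.
      verdict = "fail";
    } else if (PLACEHOLDERS.has(alt.trim().toLowerCase())) {
      // Placeholder text cannot describe the link target.
      verdict = "fail";
    } else {
      // Present and non-trivial: the evaluator judges adequacy.
      verdict = "review";
    }
    return { href, alt, verdict };
  });
}
```

In a workflow like the one described above, the "fail" verdicts can be recorded directly against the checkpoint, while the "review" items are what the evaluator actually spends time on.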

Regards,
Detlev

----- Original Message -----
From: k.probiesch@googlemail.com
To: alistair.j.garrison@gmail.com, public-wai-evaltf@w3.org
Date: 10.05.2012 13:11:42
Subject: AW: Step 1.e: Define the Techniques to be used


> Hi all,
> 
> just my two cents:
> 
>> P.s. It would of course be much simpler for everyone if the web
>> developers would stick to using the W3C Sufficient Techniques and
>> Failure Conditions.
> 
> It would be simpler for testing, of course, but it would have a lot of
> disadvantages. Not all techniques are known, and who would develop new
> techniques if it were necessary to wait until a technique is part of the
> Techniques document? What would happen - just very theoretically - if
> there were no experts working on the Techniques document? No new
> techniques at all. And so on. I think the more severe problem so far is
> the misunderstanding of the character of the techniques in WCAG 2.0.
> Sometimes I think it would be better and clearer if the techniques were
> not part of WCAG 2.0.
> 
> Cheers
> 
> Kerstin 
> 
>> 
>> All the best
>> 
>> Alistair
>> 
>> On 10 May 2012, at 12:14, Detlev Fischer wrote:
>> 
>> > Hi Alistair,
>> >
>> > Your answer does not address the problems I listed.
>> >
>> > Regarding your question: We use a web-based application, BITV-Test,
>> which is based on 50 publicly documented checkpoints. These reference
>> techniques but consolidate the often similar tests found in individual
>> techniques to achieve an efficient testing workflow.
>> >
>> > The application itself then has a page per checkpoint where you can
>> record your ratings and any comments you have for all the pages in the
>> sample.
>> >
>> > Unfortunately the application is in German but this may give you an
>> idea of one checkpoint description (one of the longest):
>> > http://testen.bitvtest.de/index.php?a=di&iid=12&s=n
>> > (This is the checkpoint for checking whether linked images have
>> adequate alternative text.)
>> >
>> > Regards,
>> > Detlev
>> >
>> >
>> >
>> >
>> > On 10 May 2012, at 11:45, Alistair Garrison wrote:
>> >
>> >> Hi Detlev,
>> >>
>> >> If you don't know what techniques have been followed, or are not
>> interested to know, what are you actually evaluating against?
>> >>
>> >> Can I just check - do you evaluate against a checklist for WCAG 2.0
>> which you have developed? or is it something else?
>> >>
>> >> All the best
>> >>
>> >> Alistair
>> >>
>> >> On 10 May 2012, at 11:42, Detlev Fischer wrote:
>> >>
>> >>> Hi Alistair,
>> >>>
>> >>> If a commissioner says: I have used this new technique for, say,
>> skipping blocks, or displaying lightboxes, it certainly makes sense to
>> report back on the success of that particular technique.
>> >>>
>> >>> However, I see several problems making this step mandatory:
>> >>>
>> >>> * In some cases, evaluators will have no access to the authors of
>> the site under test
>> >>>
>> >>> * Where do you stop? There are hundreds of techniques. Which ones
>> should be defined?
>> >>>
>> >>> * Many (most) implementations are similar to the bare-bones WCAG
>> techniques, but rarely exactly the same. Mapping adapted techniques to
>> WCAG Techniques reliably will be tricky.
>> >>>
>> >>> * Advanced script-based techniques are very difficult to check. We
>> can look at the page and check whether, say, a dynamically inserted
>> element receives keyboard focus or is hidden automatically once the
>> keyboard focus leaves it. But do we really need to dive into the script
>> to see how this has been implemented? (Maybe this is not what you
>> meant.)
>> >>>
>> >>> I think it may be useful to tick off techniques if it is obvious
>> that they have been used (successfully or unsuccessfully), and
>> especially to tick off failures when they clearly apply (because this
>> proves that an SC has not been met in all cases, without the disclaimer
>> that some other technique might have been used). HOWEVER, identifying
>> ALL techniques used during an evaluation seems a high burden. I can't
>> quite see the benefit.
>> >>>
>> >>> Regards,
>> >>> Detlev
>> >>>
>> >>> On 10 May 2012, at 10:48, Alistair Garrison wrote:
>> >>>
>> >>>> Dear All,
>> >>>>
>> >>>> "Step 1.e: Define the Techniques to be used" - could we consider
>> making this step non-optional?
>> >>>>
>> >>>> The first reason is that we really need to check the site's
>> implementation of the techniques (W3C's, their own code of best
>> practice, or whatever) its developers say they use.
>> >>>>
>> >>>> For example:
>> >>>>
>> >>>> - Case 1) If they have done something by using technique A, and we
>> evaluate using technique B there could be an issue (they might fail B);
>> >>>> - Case 2) If they have done something by using technique A, and we
>> evaluate using technique A and B there still could be an issue (they
>> might fail B);
>> >>>> - Case 3) If they have done something by using technique A, and we
>> evaluate using technique A - it seems to work.
>> >>>>
>> >>>> The second reason is that testing only seems to be truly
>> replicable if we know which techniques they said they implemented -
>> otherwise, two different teams could easily get two different results
>> based on the cases above.
>> >>>>
>> >>>> I would be interested to hear your thoughts.
>> >>>>
>> >>>> Very best regards
>> >>>>
>> >>>> Alistair
>> >>>>
>> >>>
>> >>> --
>> >>> Detlev Fischer
>> >>> testkreis - das Accessibility-Team von feld.wald.wiese
>> >>> c/o feld.wald.wiese
>> >>> Borselstraße 3-7 (im Hof)
>> >>> 22765 Hamburg
>> >>>
>> >>> Tel   +49 (0)40 439 10 68-3
>> >>> Mobil +49 (0)1577 170 73 84
>> >>> Fax   +49 (0)40 439 10 68-5
>> >>>
>> >>> http://www.testkreis.de
>> >>> Beratung, Tests und Schulungen für barrierefreie Websites
>> >>>
>> >>>
>> >>>
>> >>
>> >
>> > --
>> > Detlev Fischer
>> > testkreis - das Accessibility-Team von feld.wald.wiese
>> > c/o feld.wald.wiese
>> > Borselstraße 3-7 (im Hof)
>> > 22765 Hamburg
>> >
>> > Tel   +49 (0)40 439 10 68-3
>> > Mobil +49 (0)1577 170 73 84
>> > Fax   +49 (0)40 439 10 68-5
>> >
>> > http://www.testkreis.de
>> > Beratung, Tests und Schulungen für barrierefreie Websites
>> >
>> >
>> >
> 
> 

Received on Thursday, 10 May 2012 11:45:19 UTC