Re: Step 1.d. Define evaluation methods

I personally have no problems with the text as it stands - but from feedback I have heard from others, I know that they were uncertain how much documentation would be expected if they opted for this optional step. It could be just one short reference or extensive (time-consuming) documentation. This is what I perceived as a weakness of the step description. I had always thought of custom techniques as the case where this step would clearly be useful. 

I doubt that referencing WCAG Technique IDs would really add anything substantial, since much of the assessment will depend on judgements for which no definitive rating help exists in WCAG (such as deciding whether, or when, gaps or inconsistencies in a heading hierarchy would be enough to fail SC 1.3.1). Will it help to say "Used H42"? I can't see how... But as I said, I won't take issue with that section - the rewording was just an attempt to make what I saw as its main purpose clearer.
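
To make that concrete, here is a purely hypothetical sketch (my own invention, not taken from WCAG or any existing tool, and all names below are made up for illustration): a mechanical check can easily list the places where a heading outline skips levels, but it cannot decide whether any of those gaps is enough to fail SC 1.3.1 - that judgement stays with the evaluator.

    // Hypothetical sketch: list downward level skips in a heading outline.
    // Finding a skip is mechanical; rating it against SC 1.3.1 is not.
    interface Heading {
      level: number; // 1 for <h1> ... 6 for <h6>
      text: string;
    }

    function findLevelSkips(headings: Heading[]): string[] {
      const findings: string[] = [];
      let previous: Heading | null = null;
      for (const current of headings) {
        // Flag e.g. an <h4> that directly follows an <h2>.
        if (previous !== null && current.level > previous.level + 1) {
          findings.push(
            `h${previous.level} "${previous.text}" is followed by h${current.level} "${current.text}"`
          );
        }
        previous = current;
      }
      return findings;
    }

    // Invented example outline with one skip (h2 -> h4); whether that skip
    // is enough to fail SC 1.3.1 depends on context the script cannot assess.
    const sample: Heading[] = [
      { level: 1, text: "Products" },
      { level: 2, text: "Ordering" },
      { level: 4, text: "Shipping costs" },
    ];
    console.log(findLevelSkips(sample));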

--
Detlev Fischer
testkreis c/o feld.wald.wiese
Thedestr. 2, 22767 Hamburg

Mobil +49 (0)1577 170 73 84
Tel +49 (0)40 439 10 68-3
Fax +49 (0)40 439 10 68-5

http://www.testkreis.de
Consulting, testing and training for accessible websites

Richard Warren wrote on 11.04.2014 10:42:

> I agree with Alistair,
> 
> Richard
> 
> -----Original Message----- 
> From: Alistair Garrison
> Sent: Friday, April 11, 2014 9:03 AM
> To: Detlev Fischer
> Cc: public-wai-evaltf@w3.org ; evelleman@bartimeus.nl
> Subject: Re: Step 1.d. Define evaluation methods
> 
> Hi all,
> 
> I'm not in favour of this major rewording as it is too focused on custom 
> techniques. We must remember that this is an optional step, so if people 
> want to strengthen their claim by recording exactly how they tested a web 
> page (even if this is just a simple, lightweight list of references to 
> the IDs of the Sufficient techniques they used, e.g. H67) we should let 
> them, perhaps even encourage them… it would also make future re-checking 
> far easier as you preserve the context of what was actually tested.
> 
> Just my thoughts.
> 
> Alistair
> 
> On 11 Apr 2014, at 08:46, Detlev Fischer wrote:
> 
>> Hi all,
>>
>> In my view, this optional section (step 1.d) should focus on the scenario 
>> where the web content under evaluation uses a custom technique to meet a 
>> success criterion that is not covered by documented WCAG Techniques. Only 
>> in this case could it be useful to describe the particular methods used in 
>> evaluation. Example: A custom control in an intranet application (say, a 
>> combobox) has been implemented and made accessible with WAI-ARIA so that 
>> it works well with a particular browser and screen reader deployed by the 
>> organisation. The evaluation technique may then describe a sample task and 
>> the expected output when using the defined UAs and ATs.
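>>
>> As a purely hypothetical sketch of what such a documented method could 
>> look like (all names, versions and wordings below are invented for 
>> illustration, not taken from any real project), the commissioner might 
>> hand over something as small as a structured test case:
>>
>>     // Hypothetical record of a custom evaluation method for one control.
>>     interface CustomEvaluationMethod {
>>       control: string;        // the custom widget under test
>>       environment: string[];  // UA/AT combinations used by the audience
>>       sampleTask: string;     // what the tester does
>>       expectedOutput: string; // what the defined AT should announce
>>     }
>>
>>     const comboboxCheck: CustomEvaluationMethod = {
>>       control: "Intranet order form combobox (custom WAI-ARIA widget)",
>>       environment: ["Firefox 28 with NVDA 2014.1 on Windows 7"],
>>       sampleTask:
>>         "Open the combobox with the keyboard and select the third option",
>>       expectedOutput:
>>         "NVDA announces the expanded state, the list, and the selection",
>>     };
>>
>>     // Print the test case so it can be pasted into the evaluation report.
>>     console.log(JSON.stringify(comboboxCheck, null, 2));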
>>
>> As it stands, it is not clear to what lengths evaluators should go in 
>> describing their evaluation methods. If a particular testing organisation 
>> has operationalised its testing procedure, would it be enough to 
>> reference that procedure generally, or should there be details on 
>> specific checkpoints, the tools used, etc.? I think that would be far too 
>> tedious and also often not really useful for users of the report (unless 
>> the test is aimed at safeguarding maximum repeatability so that other 
>> testers will arrive at exactly the same results).
>>
>> So a rewording suggestion would be:
>>
>> ----------
>>
>> Step 1.d: Define Evaluation Methods to be Used (Optional)
>>
>> Methodology Requirement 1.d: When web content uses custom techniques to 
>> meet particular WCAG 2.0 Success Criteria, any specific evaluation methods 
>> used to evaluate the conformance of these custom techniques should be 
>> described (Optional).
>>
>> For custom techniques which are not part of the publicly documented 
>> (non-normative) Techniques for WCAG 2.0, it can be useful to document 
>> specific evaluation methods, for example, if these require the use of 
>> particular combinations of user agents and assistive technologies known 
>> to be used by the target audience of the content under test. Such 
>> evaluation methods might be made available by the site owner, their web 
>> developers, or the evaluation commissioner. Documenting the specific 
>> evaluation methods to be used helps ensure consistent expectations 
>> between the evaluator and the evaluation commissioner.
>>
>> This does not prevent the evaluator from using additional evaluation 
>> methods at a later point, for example, to evaluate particular content 
>> that was not identified at this early stage of the evaluation process.
>>
>>
>> ----------
>>
>> Best,
>> Detlev
>>
>>
>> --
>> Detlev Fischer
>> testkreis c/o feld.wald.wiese
>> Thedestr. 2, 22767 Hamburg
>>
>> Mobil +49 (0)1577 170 73 84
>> Tel +49 (0)40 439 10 68-3
>> Fax +49 (0)40 439 10 68-5
>>
>> http://www.testkreis.de
>> Consulting, testing and training for accessible websites
>>
>> Velleman, Eric wrote on 10.04.2014 16:23:
>>
>>> Your input for Step 1.d. Define evaluation methods to be used (Optional)
>>>
>>> Relevant links:
>>> - <http://www.w3.org/TR/WCAG-EM/#step1d>
>>> - <http://www.w3.org/WAI/ER/conformance/comments-20140130#comment71>
>>>
>>> Question: Do we have to explain this section more clearly? Also see 
>>> tentative
>>> resolution in the DoC. Or is this section clear to you all?
>>>
>>>
>>> Eric
>>>
>>
>>
> 
> 
> Richard Warren
> Technical Manager
> Website Auditing Limited (Userite)
> http://www.userite.com 
> 
> 

Received on Friday, 11 April 2014 13:17:18 UTC