Re: [techs] Summary of Techniques teleconference 01 October 2003

> Why is it us that needs to test this and not the main group? We also
> need to be aware that there are pretty serious traps waiting if you use
> hypothetical people you created to test your hypothetical solutions to
> their hypothetical problems. Not that it isn't a useful technique for
> desk checking, just that the results aren't that strongly guaranteed.

Although perhaps it isn't abundantly clear here, there are two pieces of 
thinking behind this: number one is quality assurance (QA), and number two 
is accountability.

Yes, this issue is for the wider group; it just came out in the techs group, 
as did the original use cases. However, in order to check (for QA) that we 
are producing the correct guidelines, we felt it was appropriate to identify 
who each guideline helps. This is a common practice in government, where 
white papers identify the target demographics of legislation. There is also 
the reverse: identifying the needs of our clients (PwDs in this case) and 
ensuring that we are satisfying all of those needs. The reason we are 
starting from the EO document is that the guidelines currently refer to it. 
As we formalise the use cases there has been a lot of discussion about how 
to go about it; the current use case document (that I produced) has a note 
about international use cases and the need to consult with appropriate 
people before making them. The same applies to the use cases for PwDs; 
however, there is a significant amount of research around identifying the 
needs of PwDs, and we are hoping to draw on that. This will also be combined 
with data from interviews conducted by David McDonald, and possibly some by 
myself once I get my testing group organised at the University of 
Sunderland (UK).

Use cases are not "hypothetical solutions to their hypothetical problems"; 
they are user heuristics. In this situation they will probably be based 
around demographics. In both cases (interview-based and demographics-based) 
these are proven methods used time and again in usability, a discipline of 
which accessibility is a specialist field.

To return to the second reason behind the call for these use cases, the 
processes behind WCAG 2.0 are very open. This is a great step, but part of 
that is making it easy to justify our position on issues. As such, having a 
formal set of use cases which we can use to say "We did this for that group 
of people" is highly beneficial. It puts us in a much stronger position 
to exercise our collective opinion on issues when we can back it up with 
specific reasoning on behalf of a specific group of PwDs.


> Is the Evaluation and Repair Tools group not going to be rechartered?
> They would seem like the obvious place to do this work, rather than a
> task force in this group.

EART are being rechartered, however this takes time. Some people in the 
WCAG Techniques task force were keen to make progress on test files for the 
techniques documents. It was felt that having tests ready for the 
techniques was a useful step toward proving their validity as 
deliverables. This is another part of the QA issues Wendy brought to the 
meeting agenda.

> You might want to look at the first draft of the testing method
> proposed by EuroAccessibility as an example...
> http://www.euroaccessibility.org/EACEvaluationChecklist0a1.html
>

Thanks for the hint; we will certainly look at it. There is also some 
discussion going on which will be presented to the list soon.


Thanks

Tom
