evaluation process questions

Here are some of the questions that have come up regarding the 
evaluation process:

1. Just as there is a need for views of the techniques for 
different authoring tools, do we need a multi-layer system for the 
testing? For example: if the tool is a video editing tool, then fill 
out sections x and y only. (A rough sketch of this idea follows the 
list.)

2. Do we want to structure the evaluation process using the WCAG as 
the primary order with the ATAG as the secondary order, or the 
reverse? The advantage of using the WCAG, or a list of possible 
element types, as the primary order is that we can skip a section 
entirely if the tool doesn't allow the authoring of that type of 
element. Or do we want to generate a new order that anticipates the 
new WCAG?

3. Do we want to take the approach of first assessing whether the 
priority 1 checkpoints have been met and, if they haven't, not 
proceeding to priorities 2 and 3, or do we want to check all levels 
each time? (A sketch of the gated approach follows the list.)

4. What kind of reports do we want to generate? Do we want one report 
for the consumer and another, more verbose report for the developer 
that also gives guidance on how to fix the problems? How should these 
reports be organized?

5. How much granularity do we want in the scoring system? Is A/AA/AAA 
enough, or do we want more specific scores for checkpoints or 
sub-checkpoints, to allow ranking by consumers who are comparing 
tools? (A sketch of one possible scoring roll-up follows the list.)
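
To make question 1 more concrete, here is a rough sketch in Python of 
how a mapping from tool type to applicable test sections might look. 
The tool types and section names below are placeholders invented for 
illustration, not agreed terms:

    # Hypothetical mapping from tool type to the evaluation sections
    # that apply to it; the labels below are illustrative only.
    APPLICABLE_SECTIONS = {
        "video editing tool": ["multimedia", "captions", "audio description"],
        "html editor":        ["images", "tables", "forms", "frames"],
    }

    def sections_to_fill_out(tool_type):
        """Return only the sections an evaluator needs to complete."""
        # An unknown tool type returns an empty list; how to handle
        # unlisted tool types is itself one of the open questions.
        return APPLICABLE_SECTIONS.get(tool_type, [])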
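
For question 3, a sketch of the gated approach (stop after priority 1 
if it fails) versus checking every level each time. The checkpoint 
objects with a priority attribute and a check() method are an 
assumption made only for the sake of the example:

    def evaluate(tool, checkpoints, gated=True):
        """Run checks priority by priority.

        If gated is True, stop after priority 1 when any P1 checkpoint
        fails; otherwise check all three levels every time."""
        results = {}
        for priority in (1, 2, 3):
            group = [c for c in checkpoints if c.priority == priority]
            results[priority] = {c.id: c.check(tool) for c in group}
            if gated and priority == 1 and not all(results[1].values()):
                break  # P1 not met: don't proceed to priorities 2 and 3
        return results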
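
For question 5, a sketch of how per-checkpoint results could be rolled 
up either to the coarse A/AA/AAA level or to a finer score per 
priority. The result format (a list of dicts with "priority" and 
"passed" keys) is assumed for illustration:

    def conformance_level(results):
        """Roll pass/fail results up to the coarse A/AA/AAA level."""
        met = lambda p: all(r["passed"] for r in results
                            if r["priority"] == p)
        if met(1) and met(2) and met(3):
            return "AAA"
        if met(1) and met(2):
            return "AA"
        if met(1):
            return "A"
        return "not conformant"

    def detailed_scores(results):
        """Finer granularity: checkpoints met out of total, per priority."""
        scores = {}
        for p in (1, 2, 3):
            group = [r for r in results if r["priority"] == p]
            scores[p] = (sum(1 for r in group if r["passed"]), len(group))
        return scores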

There are many more questions to address; please contribute any 
others you can think of.

Jutta

Received on Tuesday, 1 August 2000 13:20:42 UTC