Secondary set of issues to resolve

In the unlikely event that time remains after the tasks are complete 
at the face-to-face meeting, there are a number of issues that 
require resolution. Given that resolving some of these issues calls 
for broad rather than concentrated participation, I would suggest we 
tackle them in the teleconferences or on the list.

They include:
1. Do we require that all instances of a particular checkpoint be 
checked before establishing compliance, or only a randomly selected, 
representative set? Will this vary from checkpoint to checkpoint? 
(E.g., every prepackaged template and image, or every method of 
inserting an image.)
2. Do we want to provide sample content to create with the tool as 
part of the evaluation process? E.g., "create this complex nested 
table, then check for..."
3. What are the consumer comparison criteria that are not explicitly 
covered in the checkpoints or in the A, AA, and AAA ratings? 
Following from this, what personalized sorting criteria do we want 
the user to have access to when comparing authoring tools using the 
evaluation database?
4. How do we provide information about who the evaluators are? How 
do we screen the evaluations? How do we deal with contradictory 
evaluations? Will we include incomplete evaluations in the database?
5. Do we want to include a classification of checkpoints into those 
that can be objectively versus subjectively evaluated?

Jutta
