- From: Charles McCathieNevile <charles@w3.org>
- Date: Sat, 5 Aug 2000 14:11:56 -0400 (EDT)
- To: Jutta Treviranus <jutta.treviranus@utoronto.ca>
- cc: w3c-wai-au@w3.org
My responses are interspersed; look for "CMN".

On Tue, 1 Aug 2000, Jutta Treviranus wrote:

Here are some of the questions that have come up regarding the evaluation process:

1. Just as there is a need for views of the techniques for different authoring tools, do we need a multi-layer system for the testing, e.g. if the tool is a video editing tool, then fill out sections x and y?

CMN Yes, I definitely think this is ideal (a sketch of the idea follows at the end of this message). The other alternative is to have something like AERT, with lots of "n/a" answers that would be checked; the latter can be done as a lead-in to the former, depending on how fast I get the database stuff on our system under control.

2. Do we want to structure the evaluation process using the WCAG as the primary order with the ATAG as the secondary order, or the reverse? The advantage of using the WCAG, or a list of possible element types, as the primary order is that we can skip a section if the tool doesn't allow authoring that type of element. Or do we want to generate a new order that anticipates the new WCAG?

CMN I will address this in a separate thread.

3. Do we want to take the approach of assessing whether the priority 1 checkpoints have been met and, if they haven't, not proceeding with priorities 2 and 3, or do we want to check all levels each time?

CMN If we have multiple-view capability, it makes sense to be able to split on priority. But in general I prefer to test all the way through where possible.

4. What kind of reports do we want to generate? Do we want one report for the consumer and another, more verbose report for the developer that also gives guidance on how to fix the problems? How should these reports be organized?

5. How much granularity do we want in the scoring system? Is A/AA/AAA enough, or do we want more specific scores for checkpoints or sub-checkpoints to allow ranking for consumers who are comparing tools?

CMN I would like to produce, at the least, a piece of information that says which checkpoints have been met, as well as the overall conformance rating (the second sketch below illustrates one way to derive the rating from per-checkpoint results). I would like to link those to information/comments where possible, on a checkpoint-by-checkpoint basis. I will talk to Karl about this too.

Also: I think we want to be able to identify the assessment of a checkpoint according to who did it and when, and I think it would be helpful to be able to look at all the assessments we have of a single checkpoint, as a guide to people doing new assessments.

I have been buried in another work item, but I am hoping to get lots of time on this in the next few weeks.

Charles
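As a rough illustration of the multi-layer testing in question 1, here is a minimal sketch that maps tool types to the evaluation sections that apply to them and treats everything else as "n/a". The tool types and section names are invented for illustration; they are not an agreed taxonomy.

```python
# Sketch of the "multi-layer" testing view from question 1: given a tool
# type, select only the evaluation sections that apply to it, and record
# the rest as "n/a". Tool types and section names are hypothetical.

ALL_SECTIONS = {"markup", "images", "forms", "multimedia",
                "captioning", "navigation"}

# Hypothetical mapping from tool type to applicable sections.
SECTIONS_BY_TOOL_TYPE = {
    "video editor": {"multimedia", "captioning"},
    "html editor":  {"markup", "images", "forms"},
    "site manager": {"markup", "navigation"},
}

def evaluation_view(tool_type):
    """Return (sections to test, sections recorded as n/a)."""
    applicable = SECTIONS_BY_TOOL_TYPE.get(tool_type, ALL_SECTIONS)
    return applicable, ALL_SECTIONS - applicable

# Example: a video editing tool only fills out its two sections;
# everything else is recorded as "n/a" rather than tested.
to_test, not_applicable = evaluation_view("video editor")
```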
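A second sketch, for questions 3 and 5 and the checkpoint-level reporting discussed above: per-checkpoint assessment records carrying who/when metadata, grouped per checkpoint for reviewers, with the overall rating derived from priority levels. The rating rule (A = all priority 1 met, AA = priorities 1 and 2, AAA = all three) is the standard W3C conformance scheme; the record fields and function names are hypothetical.

```python
# Sketch of per-checkpoint assessment records with assessor/date
# metadata, plus an overall conformance rating derived from them.
# Field and function names are hypothetical, not an agreed schema.

from dataclasses import dataclass
from datetime import date

@dataclass
class Assessment:
    checkpoint: str    # e.g. "ATAG 1.1"
    priority: int      # 1, 2, or 3
    passed: bool
    assessor: str      # who did the assessment
    assessed_on: date  # when it was done
    comment: str = ""  # per-checkpoint information/comments

def conformance_rating(assessments):
    """A = all priority 1 met; AA = priorities 1-2; AAA = all three."""
    met = lambda p: all(a.passed for a in assessments if a.priority == p)
    if not met(1):
        return "not conforming"
    return "A" if not met(2) else ("AA" if not met(3) else "AAA")

def by_checkpoint(assessments):
    """Group all assessments of one checkpoint together, as a guide
    for people doing new assessments of that checkpoint."""
    groups = {}
    for a in assessments:
        groups.setdefault(a.checkpoint, []).append(a)
    return groups

# Example: priority 1 met but a priority 2 checkpoint failed -> "A".
results = [
    Assessment("ATAG 1.1", 1, True, "jt", date(2000, 8, 1)),
    Assessment("ATAG 4.2", 2, False, "cmn", date(2000, 8, 5)),
]
print(conformance_rating(results))  # prints: A
```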