Conformance Evaluation

There was a discussion in the last teleconference about providing a 
database telling whether various authoring tools meet ATAG.

I'd like to focus on one of the issues: do we rely on the tool providers 
to do self-evaluation, or does the W3C take on the role of evaluating tools?

Taking off my chair hat, I offer a few observations:

1. It would undoubtedly be useful for people to have an accurate account of 
whether tools meet ATAG.

2. However, some of the evaluation is subjective.  For example, checkpoint 
3.2 requires that the tool "Help the author create structured content and 
separate information from its presentation."  How much help is required 
before we can say this is satisfied, e.g. how many of the techniques in 
http://www.w3.org/TR/ATAG10-TECHS/#check-help-provide-structure ?  Just 
one? Five?   Similarly, checkpoint 4.1, checking for accessibility, 
references AERT, which is not a normative document, so evaluating 
satisfaction is subjective.  Checkpoint 5.2's "obvious and easily 
initiated" also worries me.  As a human factors engineer, I've had lots of 
debates with people who thought something was obvious and easily initiated 
when the users didn't.

3. W3C is supposed to be vendor neutral.

So, what I'm worried about is complaints that our subjective judgments 
favor one company or another.

Is there a subset of strictly objective checkpoints that avoids this problem?

OK, chair hat back on.  Any comments?

Len

p.s.
I copied this to Judy given the policy aspects.
--
Leonard R. Kasday, Ph.D.
Institute on Disabilities/UAP, and
Department of Electrical Engineering
Temple University 423 Ritter Annex, Philadelphia, PA 19122

kasday@acm.org
http://astro.temple.edu/~kasday

(215) 204-2247 (voice)  (800) 750-7428 (TTY)

The WAVE web page accessibility evaluation assistant: 
http://www.temple.edu/inst_disabilities/piat/wave/

Received on Friday, 9 June 2000 16:07:10 UTC