My comments on the evaluation process

Dear all,

And here is my third (and last) message on the 2012-02-09 editor's draft.
The subject of this message is the evaluation process (chapter 5).

My general view on evaluation is that this methodology should be quite
prescriptive about the details of the evaluation process for individual
pages (or page fragments). I also believe that, since this is going to be
a W3C methodology, it should refer to concepts defined by W3C/WAI (such as
techniques and failures) rather than to concepts that have not been defined
there (such as barriers). It makes perfect sense for a W3C methodology to
explicitly describe an evaluation process based on the techniques and
failures published by the W3C.

Detailed comments on chapter 5:

   - [5.1] One idea: if both automatic tools and human evaluators are used,
   and if the methodology supports workgroup-based evaluation, then the tools
   could be considered members of the team of evaluators.
   - [5.3] I would really like the methodology to be as prescriptive as
   possible here. I agree that it should not explain how to evaluate each
   individual technique or failure, but it should provide guidance on what
   to evaluate, when, how to combine individual results, and which individual
   result values are considered (pass, fail, not applicable, unknown,
   partial, ...). It should also explain what to do when one wants to obtain
   a conformance claim (a yes/no answer) or some metric (a score).
   - [5.4] I have trouble with this clause, as the concept of "barrier" is
   not considered at all in WCAG. If the methodology uses the concept of
   barriers, then it should provide full guidance on them: for instance, the
   relationship between barriers and success criteria, the assessment of the
   severity of barriers, and so on.
   - [5.5, last sentence] This is good, but only for the "random" sample.
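To illustrate the kind of prescription I am asking for in [5.3], here is a
purely hypothetical sketch (not anything from the draft) of one possible rule
for combining the individual result values into both a conformance claim and
a score; the function name and the exact combination rule are my own
assumptions:

```python
def combine(results):
    """Hypothetical combination rule (my own sketch, not from the draft):
    turn individual results ("pass", "fail", "not applicable", "unknown",
    "partial") into a yes/no-style conformance verdict and a numeric score."""
    # "Not applicable" results are excluded from both computations.
    applicable = [r for r in results if r != "not applicable"]
    if not applicable:
        # Nothing applicable: vacuously conformant.
        return "pass", 1.0
    # Conformance: any failure or partial result blocks a positive claim;
    # unresolved "unknown" results leave the claim undecided.
    if any(r in ("fail", "partial") for r in applicable):
        verdict = "fail"
    elif any(r == "unknown" for r in applicable):
        verdict = "unknown"
    else:
        verdict = "pass"
    # Score: fraction of applicable items fully passed.
    score = sum(r == "pass" for r in applicable) / len(applicable)
    return verdict, score
```

For example, combine(["pass", "fail", "not applicable"]) would yield a "fail"
verdict with a score of 0.5. The point is not this particular rule, but that
the methodology should state such a rule explicitly.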

Best regards,
Loïc

Received on Tuesday, 21 February 2012 10:58:44 UTC