Re: Draft "How To" document on evaluating a user agent for implementation of UAAG requirements available

Jon Gunderson wrote:
> 
> I have posted an initial draft of a document titled "How To Perform a User
> Agent Evaluation for the Implementation Report"[1].  I have done the first
> couple priority one checkpoints.  Please review and tell me if you think
> this type of information is useful or that something more or different is
> needed.

Hi Jon,

Thank you for getting started on this resource. There is an
interesting mix of information in it, which I would like to
try to classify:

 1) "Normative" information that is format-specific. For example,
    for checkpoint 1.2, you have enumerated most or all of the
    event handler attributes. This is exactly the type of  
    information that will end up in the test suite test files.
    Whether there is one test file per attribute or some other
    level of granularity will be something we play with.

 2) Tips on where to look for information. For instance, for 
    the first bullet of 1.1, you wrote: "Check to see if the 
    index or table of contents for keyboard or short cut references."
    This is indeed a good tip for figuring out where one might
    find information about keyboard shortcuts.

    Another tip in 1.2 is to "ask the developer". 
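
To make (1) a bit more concrete, here is a rough sketch of what a
single test file for checkpoint 1.2 might look like (one file per
event handler attribute, in this case onmouseover). The file wording
and expected result are placeholders of my own, not a proposal:

  <!-- Sketch only: a possible test file for checkpoint 1.2 (onmouseover) -->
  <html>
    <head><title>UAAG 1.0 checkpoint 1.2 test: onmouseover</title></head>
    <body>
      <p>Test: activate the handler on the link below without using
         the pointing device. Expected result: the link text changes.</p>
      <p><a href="#end"
            onmouseover="this.firstChild.data='Handler activated'">
         Activate the onmouseover handler on this link</a></p>
    </body>
  </html>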

Both types of information are useful, and I think of them
as ultimately being housed in two different places:

1) The test suite would include all of the format-specific
   "normative" parts. We have yet to discuss what "normative"
   means here: if the UAWG says that conditional content in
   the case of HTML is a specific list of elements and attributes,
   in what way is that normative for conformance to UAAG 1.0
   (which is not format-specific)? I suspect that one answer
   to this question is "you can conform to the test suite as well
   as to UAAG 1.0". 

2) The "How to Evaluate a User Agent" document would discuss 
   things more along the lines of "where do I find this out?"

   I have been thinking for some time that it would be useful
   to set expectations by organizing the checkpoints into at
   least the following groups:

   a) Checkpoints that can be "instantiated" for a format (e.g., HTML).
    Clearly 2.3 (conditional content) is one, and probably most of the
    checkpoints that are "for content" would fall into this category.
    These checkpoints, I suspect, lend themselves most readily to
    a test suite framework.

   b) Checkpoints that are likely to require input from the software
    developer, which may take many forms: discussion with the developer,
    documentation, assertions from Web sites, etc.

    Checkpoint 1.1 (keyboard access) is an interesting case because
    it's for both content and the user interface. So it would be
    possible to provide test cases (e.g., for HTML, accesskey and
    tabindex tests; a sketch follows this list), but for UI
    functionality, it will probably be necessary to get assertions
    from developers. Are these assertions sufficient? To date I have
    thought that they are, though always subject to change if an
    assertion proves incomplete or mistaken.

   c) Checkpoints that depend on external references. A number of
    checkpoints refer to conformance to external resources, whether
    W3C specifications or operating system conventions. For example,
    for checkpoint 12.1, in order to gauge conformance to UAAG 1.0,
    it will be necessary to assess conformance to another specification
    (WCAG 1.0). This is probably another set of checkpoints where the
    developer will be at least helpful and at most indispensable.
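
To illustrate the content side of (b), here is a rough sketch of what
an accesskey/tabindex test case for checkpoint 1.1 might look like.
Again, the key choices, element mix, and wording are mine and purely
illustrative:

  <!-- Sketch only: a possible checkpoint 1.1 content test
       (accesskey and tabindex) -->
  <html>
    <head><title>UAAG 1.0 checkpoint 1.1 test: accesskey/tabindex</title></head>
    <body>
      <p>Test: using only the keyboard, move focus to and activate
         each control below, in the order 1, 2, 3.</p>
      <form action="#" method="get">
        <p>
          <a href="#top" tabindex="1" accesskey="k">Link (1)</a>
          <input type="text" name="field" tabindex="2" accesskey="t">
          <input type="submit" value="Submit (3)" tabindex="3">
        </p>
      </form>
    </body>
  </html>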

I think it's important for the UAWG to write down what's in our heads
for all the types of checkpoints. Mickey made the point well in a
teleconference that he didn't know what the "standards" were. The
proposed test suite and other materials (such as implementation reports
that are useful for comparisons) will help answer "What does this mean
in the case of SMIL 1.0?"

During this week's teleconference, I mentioned a possible test suite
framework where one could select a few options (e.g., "I want a test
suite for HTML 4.0"), press a button, and generate a format-specific
test suite. This would make it easier to do an evaluation for a given
format because it would probably eliminate some inapplicable
checkpoints.
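
By way of illustration only, the front end could be as simple as a
small form; the script name, field names, and list of formats below
are all made up for the sketch:

  <!-- Sketch only: a hypothetical front end for generating a
       format-specific test suite -->
  <form action="generate-test-suite" method="get">
    <p>
      Format:
      <select name="format">
        <option value="html40">HTML 4.0</option>
        <option value="smil10">SMIL 1.0</option>
      </select>
      <br>
      Content types claimed:
      <input type="checkbox" name="content" value="speech"> synthesized speech
      <input type="checkbox" name="content" value="video"> video
      <br>
      <input type="submit" value="Generate test suite">
    </p>
  </form>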

I think that in general, it's helpful to remind developers that
conformance is possible for some subsets of all the checkpoints. For
this reason, when doing an evaluation, it's useful to look at, say,
content types up front and to eliminate one or more groups of
checkpoints right away: "Since you are not trying to conform for
synthesized speech output, you can ignore the speech checkpoints of
Guideline 4."

Another useful tip: use the checklist. The priority one checkpoints
are listed first, so you can cover the most important requirements
first. If a developer understands that a user agent satisfies most of
the (applicable) priority one checkpoints, the developer may be
inspired to go for the rest.

Summary: The "how to evaluate" document should talk about the process
of doing an evaluation. The test suite should include the details
(mostly format-specific details) that should be consulted as the
reviewer carries out the process. For this reason, I don't think the
"how to evaluate" document should include the full list of checkpoints.

 - It should have pointers to the document, checklist, and
   Techniques Document.

 - It should have pointers to the test suite (which does not yet exist).

 - Checkpoints should be used as examples (as I've done above).

 - The example of how to conduct an evaluation that is part
   of section 3.1 of the Guidelines should be developed in more
   detail in the how to evaluate document.

I look forward to making this an extremely valuable resource.
Thanks again Jon!

 - Ian

> [1] http://www.w3.org/WAI/UA/implementation/how-to.html

-- 
Ian Jacobs (ij@w3.org)   http://www.w3.org/People/Jacobs
Tel:                     +1 718 260-9447

Received on Friday, 5 October 2001 15:42:11 UTC