Re: Technology-specific checklists

Ben has raised a number of significant questions in his summary. By
way of contribution, I have, as usual, several personal opinions to
offer; these are placed before the group for discussion and debate.

1. I think "dynamically-generated" presentations would be preferable,
   if they can be implemented given the resources available to the
   group. For example, if I am evaluating a web site that uses a given
   combination of technologies, I should be able to generate a
   checklist that specifies all of the techniques relevant to my
   situation, with references to the success criteria that they
   satisfy. Alternatively, if I am writing an authoring tool that
   implements a given technology and want to ascertain which
   techniques should be followed so that my tool produces highly
   accessible output meeting WCAG 2.0, then I will need a checklist
   restricted just to the technology in question (a rough sketch of
   such filtering follows this item).
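
   To make this concrete, here is a rough sketch in Python of how
   such a filter might work. Every record and field name below is an
   invented placeholder for illustration; nothing here reflects any
   agreed data model for the techniques.

      # Hypothetical technique records; all field names are invented
      # placeholders, not part of any agreed WCAG data model.
      TECHNIQUES = [
          {"id": "T1", "technology": "XHTML",
           "success_criteria": ["1.1"],
           "title": "Supply a text equivalent via the alt attribute"},
          {"id": "T2", "technology": "CSS",
           "success_criteria": ["1.3"],
           "title": "Keep presentation separate from structure"},
          {"id": "T3", "technology": "SVG",
           "success_criteria": ["1.1"],
           "title": "Describe graphics with title and desc elements"},
      ]

      def generate_checklist(chosen_technologies):
          """Return only the techniques relevant to the chosen
          combination of technologies, each carrying the success
          criteria it satisfies."""
          return [t for t in TECHNIQUES
                  if t["technology"] in chosen_technologies]

      # An evaluator of an XHTML+CSS site would then request:
      for item in generate_checklist({"XHTML", "CSS"}):
          print(item["id"], item["title"], "->",
                item["success_criteria"])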

2. Ben inquired how technologies which, individually, aren't relevant
   to all of the checkpoints should be treated. One solution would be,
   instead of taking each technology separately, to write techniques
   which presuppose that certain technologies are used in combination
   with each other. For example, XHTML is typically used with one or
   more audio or image formats, and with a style language (usually,
   but not always, CSS), so it might be better to combine the relevant
   techniques and examples under the assumption that certain types of
   technology are typically used together. If this combining could
   take place automatically it could be very flexible indeed, and such
   a possibility might be worth investigating further. CSS is a good
   example of a technology for which there are techniques, but it is
   always used in conjunction with XML or HTML. The conformance claim
   would be made in respect of the combined content (XML, XHTML, HTML,
   CSS, etc.), and it should be easy for an author to obtain a
   checklist that recognizes this fact. I don't want to work through
   one checklist for XHTML, another for CSS, and so on, but rather to
   complete a single, unified checklist relevant just to the
   technologies I have chosen (a rough sketch of such combining
   follows this item). Whether this desire can be practically managed
   remains an open question, but we should at least consider the
   issue.
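
   Continuing the same invented data model sketched under point 1,
   the "automatic combining" could amount to grouping the relevant
   techniques under the success criteria they address, so that the
   author works through one unified list rather than one list per
   technology:

      from collections import defaultdict

      def unified_checklist(chosen_technologies, techniques):
          """Produce a single combined checklist for a combination
          of technologies, grouped by success criterion."""
          by_criterion = defaultdict(list)
          for t in techniques:
              if t["technology"] in chosen_technologies:
                  for sc in t["success_criteria"]:
                      by_criterion[sc].append(t["id"])
          return dict(by_criterion)

      # Using the TECHNIQUES records from the earlier sketch:
      # unified_checklist({"XHTML", "CSS", "SVG"}, TECHNIQUES)
      # -> {"1.1": ["T1", "T3"], "1.3": ["T2"]}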

3. Regarding the review requirements in the level 2 success criteria,
   by definition there aren't any "hard and fast" rules for
   determining whether the content has the indicated characteristics;
   this is why the success criteria say that a review must be carried
   out (taking into account the information we provide), and, on that
   basis, an opinion formed. The review should be undertaken according
   to a consistent process and may involve testing with actual users
   (cf. Appendix B of the latest internal draft). We may be able to
   provide advice in the techniques on how to conduct such reviews.
   Beyond that, I don't think definite technology-specific techniques
   can be laid down; if they can, then they ought to serve as the
   basis of further success criteria in the guidelines proper. The
   Education and Outreach Working Group, as I recall, has been working
   on issues of content evaluation and may have advice to give
   regarding "best practice" in the conduct of qualitative reviews.
   Also, we can offer (technology-specific) examples of what we regard
   as exemplary.

4. As already indicated, I think we should distinguish carefully
   between techniques and examples. When Wendy and I reviewed the
   guidelines last November to gain an overview of how they related to
   the techniques, we discovered that there were many issues best
   addressed in "core" techniques, without referring to any specific
   technology, but for which we could also provide technology-specific
   examples. Thus, an explanation of how to write a good text
   equivalent (checkpoint 1.1) need not refer to any particular
   technology, but we can certainly provide examples of how checkpoint
   1.1 as a whole is implemented in specific technologies, for example
   SVG or XHTML. To make a broad generalization, there aren't many
   points in the guidelines at which technology-specific advice is
   needed: such advice tends to attach to certain checkpoints in the
   document, chiefly under guidelines 1 and 2, less so under
   guidelines 3 and 4, and again under guideline 5. A sketch of how
   technology-specific examples might attach to a core technique
   follows this item.
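
   As a rough sketch of the distinction, again with invented field
   names and content, a core (technology-neutral) technique could
   carry its technology-specific examples as attachments rather than
   as separate techniques:

      # A technology-neutral "core" technique for checkpoint 1.1,
      # with technology-specific examples attached; everything here
      # is invented for illustration.
      CORE_TECHNIQUE = {
          "checkpoint": "1.1",
          "title": "Write a good text equivalent",
          "examples": {
              "XHTML": '<img src="chart.png"'
                       ' alt="Sales rose 10% in the second quarter"/>',
              "SVG": "<desc>Sales rose 10% in the second"
                     " quarter</desc>",
          },
      }

      # A technology-specific view of the document would select from
      # "examples", while the explanation itself stays neutral.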

5. It has sometimes been suggested that the WCAG 2.0 techniques should
   state explicitly which techniques, in which combinations, are
   regarded as satisfying specific success criteria, and which ought
   instead to be considered as alternatives. To be specific,
   implementors want to know, for example, that a given success
   criterion can be satisfied by implementing either technique A, or
   techniques B and C together. This kind of information is essential.
   I don't have a firm opinion on whether alternatives should be
   prioritized. Obviously, selection among alternative techniques is
   likely to be governed not only by technological factors (issues of
   user agent baseline, checkpoint 5.2) but also the author's design
   preferences more generally. I would argue that each technique which
   relies on a particular feature of a technology should indicate, at
   least, the version of the technology in which the relevant features
   first became available. Whether filtering can be carried out with
   respect to this information depends on how we set up the dynamic
   checklist generation system, which increasingly appears to me to
   take the form of a database with a prescribed set of fields for
   each technique (a rough sketch follows this item).
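
   As an illustration of what such database records might contain
   (all field names and values below are invented), the "either A, or
   B and C together" relationship can be captured as a list of
   sufficient combinations, and the version requirement as a field
   on each technique:

      # Each technique records the technology version in which the
      # features it relies on first became available.
      TECHNIQUE_VERSIONS = {
          "A": {"technology": "XHTML", "min_version": "1.0"},
          "B": {"technology": "CSS", "min_version": "2"},
          "C": {"technology": "XHTML", "min_version": "1.1"},
      }

      # For a given success criterion, a list of sufficient
      # combinations: implementing every technique in any one inner
      # set satisfies the criterion.
      SUFFICIENT = {
          "SC-x": [{"A"}, {"B", "C"}],  # A alone, or B and C together
      }

      def options_for(criterion):
          """List the alternative ways of satisfying a criterion."""
          return [" + ".join(sorted(combo))
                  for combo in SUFFICIENT[criterion]]

      print(options_for("SC-x"))  # ['A', 'B + C']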

I think that covers many of the issues which Ben raised.
