Notes on verifiability of checkpoints

In today's teleconference I undertook to post the following notes to
the list (they have been edited slightly to reflect recent working
group discussions).

Here are some brief comments on the verifiability of each of the WCAG
2.0 checkpoints:

1.1 The existence of a text equivalent is usually indicated explicitly
   in the content itself (e.g., by an element or attribute in a markup
   language). If this is not the case, as where multiple versions of
   the same content are provided by the author, an equivalence
   relationship between graphical and auditory presentations in one
   version, and the text in another, may be more difficult to
   establish. If there is no explicit association, in markup or a data
   model, between the auditory/graphical presentation and the
   equivalent, but the two are juxtaposed (as in a figure caption, for
   instance), it may or may not be sufficiently clear to the reader
   that the equivalent exists. Which of the foregoing scenarios should
   be regarded as satisfying the checkpoint? Suppose instead that the
   auditory/graphical presentation is redundant (that is, the same
   information is conveyed in the text, but without referring
   explicitly to the sound/graphic). Is this sufficient?

   The adequacy of the text equivalent can only be ascertained from
   knowledge of the context and the purpose of the content, and requires
   an exercise of judgment.

1.2 Synchronization may be evident from the content format itself
   (e.g., the markup language), but even if this is not so, it should
   be possible to determine (without much difficulty) whether, in fact,
   the equivalents have been synchronized. Should an upper limit be
   imposed on the time interval between the auditory/visual content and
   the synchronized text?

1.3 Same as 1.2.

1.4 Automated tools can provide heuristics to help in determining
   whether the logical structure of the content has been adequately
   represented, but in part this checkpoint demands an exercise of
   human judgment. I suspect, however, that individuals who are
   familiar with a particular markup language or other means of data
   representation would tend to agree, in most cases, as to whether
   proper structure has been supplied. Thus the requirement is not
   "subjective", whatever that means.

1.5 This can generally be verified by simple inspection: have the
   presentational and structural aspects been represented independently
   of each other? For example, have the author-supplied presentational
   conventions been provided as style sheets? Are structural elements
   represented independently of layout operators etc.?
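
   For HTML in particular, one rough way to carry out such an inspection
   by machine is to scan for presentational markup that ought instead to
   be expressed in a style sheet. In the Python sketch below, the element
   and attribute lists are illustrative assumptions, not a complete
   inventory.

      from html.parser import HTMLParser

      PRESENTATIONAL_TAGS = {"font", "center", "b", "i", "u"}
      PRESENTATIONAL_ATTRS = {"align", "bgcolor", "color", "size", "border"}

      class PresentationChecker(HTMLParser):
          """Report presentational elements/attributes mixed into markup."""
          def __init__(self):
              super().__init__()
              self.findings = []

          def handle_starttag(self, tag, attrs):
              if tag in PRESENTATIONAL_TAGS:
                  self.findings.append("presentational element <%s>" % tag)
              for name, _value in attrs:
                  if name in PRESENTATIONAL_ATTRS:
                      self.findings.append(
                          "presentational attribute '%s' on <%s>" % (name, tag))

      checker = PresentationChecker()
      checker.feed('<center><font color="red">News</font></center>')
      print("\n".join(checker.findings) or "no obvious presentational markup")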

2.1 In many cases this should be evident enough, though the tests
   cannot necessarily be automated. For example, if there is a search
   option as well as an interrelated set of links connecting the
   various documents or pages comprising a web site, the requirement
   will have been met. Similarly, if there is a separate index or table
   of contents, this checkpoint will be satisfied. If it is clear what
   types of mechanism can meet the checkpoint, it should be relatively
   easy to decide whether the requirement has been fulfilled. The
   challenge, then, is to decide, firstly, under what circumstances the
   requirement should apply, and secondly, what should be counted as
   satisfying the checkpoint.
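
   To the extent that the acceptable mechanisms can be enumerated, even
   their presence can be detected mechanically, as in the rough Python
   sketch below; the phrase list is merely an assumption for
   illustration, and detecting a mechanism is of course not the same as
   judging its usefulness.

      from html.parser import HTMLParser

      NAV_PHRASES = ("table of contents", "site map", "index", "search")

      class NavChecker(HTMLParser):
          """Look for link text suggesting a site-wide navigation aid."""
          def __init__(self):
              super().__init__()
              self.in_link = False
              self.found = set()

          def handle_starttag(self, tag, attrs):
              if tag == "a":
                  self.in_link = True

          def handle_endtag(self, tag):
              if tag == "a":
                  self.in_link = False

          def handle_data(self, data):
              if self.in_link:
                  for phrase in NAV_PHRASES:
                      if phrase in data.lower():
                          self.found.add(phrase)

      sample = ('<a href="toc.html">Table of Contents</a> '
                '<a href="map.html">Site map</a>')
      checker = NavChecker()
      checker.feed(sample)
      print(sorted(checker.found))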

2.2 Given the discussion and examples that follow it, the checkpoint
   should be possible to apply, but it may be difficult to extend the
   principle beyond the specific cases cited in the text of the
   guidelines. What level of consistency/predictability is required,
   under what circumstances, and in what respects? The requirement is
   vague. Does
   it refer only to internal consistency (between content on the same
   web site or by the same author), or does it instead demand
   conformity to externally imposed conventions (and if so, what
   conventions and under what circumstances)? If what is needed,
   rather, is conformity to users' expectations, then how should the
   author determine, in the relevant respect, what users' expectations
   are likely to be (for example, the behaviour of graphical interfaces
   varies among operating systems, and even within a single operating system)?

2.3 Same as for 2.2.

2.4 The existence of a time-out should be easily verifiable, as should
   the existence of a mechanism allowing it to be deactivated.

2.5 Given a (technology-specific) list of device-independent event
   handlers for each relevant standard or interface, this should be
   easy to verify.
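
   For HTML, for instance, such a check might look like the Python
   sketch below, which flags mouse-specific handlers that have no
   keyboard counterpart on the same element; the pairings are the
   conventional ones, given here as an assumption rather than a
   normative list.

      from html.parser import HTMLParser

      # Assumed mouse-to-keyboard handler pairings (not a normative list).
      PAIRS = {"onmousedown": "onkeydown",
               "onmouseup": "onkeyup",
               "onmouseover": "onfocus",
               "onmouseout": "onblur"}

      class HandlerChecker(HTMLParser):
          """Flag device-dependent handlers lacking keyboard equivalents."""
          def __init__(self):
              super().__init__()
              self.findings = []

          def handle_starttag(self, tag, attrs):
              attrs = dict(attrs)
              for mouse, key in PAIRS.items():
                  if mouse in attrs and key not in attrs:
                      self.findings.append(
                          "<%s> uses %s without %s" % (tag, mouse, key))

      checker = HandlerChecker()
      checker.feed('<a href="#" onmouseover="showMenu()">Products</a>')
      print("\n".join(checker.findings))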

3.1 Same misgivings as per 2.2.

3.2 This is related to checkpoint 1.5, but here we are more concerned
   with how adequately the presentation, etc., would make the
   structure apparent to a prospective reader. An exercise of judgment
   will obviously be required here. This checkpoint also presupposes
   that the author will have some influence over the final
   presentation, which is not always true. Thus the requirement only
   applies to the extent that the author can influence the presentation.

3.3 As has been pointed out in working group discussions, this
   checkpoint is inadequately defined. How are "clarity" and
   "simplicity" to be assessed? What are the criteria for deciding
   whether one means of expressing an idea is clearer, or simpler, than
   another? Obviously, each author will know whether care has been
   exercised in the preparation of the text, and whether attention has
   been paid to matters of writing style. Should this level of
   assurance be sufficient? Perhaps the requirement is more operational
   than substantive, viz., one should first satisfy oneself that the
   text is as clear and simple as practicable, given the nature of the
   content, and then arrange for it to be proofread and reviewed by
   someone else. What is the relevance of the "intended audience" in
   defining the nature of this requirement?

3.4 The main difficulties are: (1) deciding in what circumstances such
   illustrations are likely to be helpful; and (2) judging their
   appropriateness, once provided. Both of these are difficult points.

3.5 There is no clear criterion of complexity; hence it is hard to
   decide under what circumstances a summary is needed.
   Context-dependent judgment is needed, unless more definite criteria
   can be established.

3.6 This should be reasonably easy to verify in most cases: acronyms
   and abbreviations can often be identified automatically, and it
   should be reasonably clear which words are intended as technical
   terms and which are not, at least to the author.
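
   A crude heuristic for the automatable part, offered only as a
   sketch in Python: treat words consisting of capital letters (and
   digits) as candidate acronyms and present them to the author for
   confirmation and expansion.

      import re

      def candidate_acronyms(text):
          """Return candidate acronyms: words of two or more characters,
          beginning with a capital letter and otherwise capitals/digits."""
          return sorted(set(re.findall(r"\b[A-Z][A-Z0-9]+\b", text)))

      print(candidate_acronyms(
          "The WCAG checkpoints are published by the W3C WAI working group."))
      # ['W3C', 'WAI', 'WCAG']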

3.7 When is division into smaller units necessary? How small must they
   be? Context-dependent judgment is required here, unless firmer
   criteria can be provided.

4.1 The issue here is: does the technology in question enable those
   checkpoints to which it is relevant to be properly satisfied? This
   requires an evaluation of the technology in relation to the
   guidelines, a process that would necessarily involve judgment, but
   in many cases it should be a relatively straightforward technical
   task. Of course, if one uses technologies for which the W3C has
   provided "checkpoint solutions", the problem is solved. If, however,
   one wants to deploy other technologies, then one has the responsibility
   to decide whether they can be used in such a way that conformance
   with the guidelines is still possible.

4.2 This depends on the clarity of the specification regarding what
   is, and what is not, correct usage.

4.3 Same comment as per 2.5.

4.4 This can be verified experimentally by turning off the
   presentation effects in question and trying to read/interact with
   the content. 
