Summary Report - Rating ATAG Success Criteria for Objectivity - 8 Apr 03
Webster's [REF 1] definition of "objective" is summarized in the following text. Objectivity
refers to being uninfluenced by emotion or surmise.
Objectivity is based on universally observable phenomena (things that actually exist as
opposed to things thought to exist), as well as universally measurable phenomena. Objectivity refers to universally verifiable reality. It may
involve universal reliability (repeatability) as well as the use of common, uniform, systematic, and/or scientific criteria to achieve a result.
Objective testing should be unambiguously repeatable, verifiable, and measurable without question.
Possible factors in the degree of objective testing may include: (1) whether the results can be verified
by machine or by humans, and (2) whether both "black-box" and "white-box" testing may be needed.
Consideration was also given in the objectivity evaluation to the purpose and formats of success criteria, as
well as the relationship of such criteria to checkpoints and implementation techniques.
The normative document used in the preparation of this report is the Authoring Tool Accessibility Guidelines 2.0 -
W3C Working Draft of 14 March 2003. Implementation techniques used in the preparation
of this report are drawn from Implementation Techniques for Authoring
Tool Accessibility Guidelines 2.0 - W3C Working Draft of 14 March 2003.
Below is a summary of apparent issues (from my perspective) with the success criteria given in the 14 March ATAG draft from the point of view
of objectivity, along with one or two examples illustrating each issue. Proposed solutions are also given.
Some general comments are also given on Sections 1.4 and 1.5 of the ATAG Working Draft referenced above. Comments are welcome.
- Success criteria are not consistently stated.
EXAMPLE: The formats and language of the success criteria for Checkpoints 1.1, 1.4, 1.6, 2.3, 2.4, and 3.5 are different. There should be a
uniform "template" for expressing the semantics of success criteria. PROPOSED SOLUTION: Create a common format for expressing success criteria.
- Terminology is not consistent or well-defined. EXAMPLE: "content", "document", "element", "object property", and "markup" seem to be used
interchangeably and without common definition (e.g., success criteria for Checkpoints 1.2 ("element", "object property"), 2.1 ("content"), 2.3 ("markup"), 3.2 ("document")).
PROPOSED SOLUTION: Establish common definitions and use the same term consistently for the same concept.
- The kind of accessibility desired is not stated explicitly. EXAMPLE: In the success criteria for Checkpoint 1.1, "accessible to the author" in what way? Visual? Auditory? Tactile?
PROPOSED SOLUTION: Be specific about the kind of accessibility required.
- It is difficult to determine what is required (minimum requirements). Subjective words (like "quickly" and "easily") are used.
EXAMPLE: In the success criteria for Checkpoint 4.1 "easily" is used, and in the success criteria for Checkpoint 1.6 "quickly" is used.
PROPOSED SOLUTION: Use words like "must do" and "required" to specify minimum requirements. NOTE: It is possible to test "should" statements,
but they should be considered as options; EXAMPLE: "should" and "choice of" used in the success criteria for Checkpoint 1.6 could be tested as options.
- It seems that implementation techniques, informational statements, and conformance requirements are mixed together,
which makes the objectivity of the entire success criteria statement difficult to determine.
EXAMPLE: In the success criteria for Checkpoint 1.6, the second sentence seems to be an implementation technique, whereas the first
sentence seems to be a conformance requirement; and in the success criteria for Checkpoint 1.4, the second sentence is an implementation technique,
whereas the first sentence is a conformance requirement. PROPOSED SOLUTION: Use only conformance requirement language (or specify options where
applicable) in success criteria language.
- Some of the success criteria seem to be duplicates of one another, and some statements in the
success criteria appear contradictory or do not appear to belong in the same success criteria. EXAMPLE: In the success criteria for Checkpoint 1.4, the third sentence
appears to have no relation to the first sentence; in the success criteria for Checkpoint 2.3, the second sentence seems to contradict the first sentence; and the
success criteria for Checkpoints 1.2, 1.3, 1.5, and 1.6 seem to be at least partial duplicates of one another.
PROPOSED SOLUTION: Ensure that the semantics of each success criterion are unique, and that each sentence in a success criteria statement
contributes consistently to, and is necessary for, the meaning of that statement.
- References to WCAG are nonspecific. EXAMPLE: In the success criteria for Checkpoint 3.8, "WCAG conformance" is mentioned without being specific as to
which version/date/spec of WCAG, and in the success criteria for Checkpoint 4.2 the language "minimum requirements of WCAG" is used without being specific as
to the version/date/spec of WCAG referenced. PROPOSED SOLUTION: Specify exactly the version/date/spec of WCAG each time WCAG is referenced in a success criterion.
Comments below pertain to perceived objectivity issues in Sections 1.3, 1.4 and 1.5 of the ATAG Working Draft mentioned above.
The words "specific to be verifiable", "the checkpoints specify requirements for meeting the guidelines", and "a minimum basic functionality requirement that is
normative" are used in Section 1.3; the words
"Relative Priority (Level 1, 2, or 3)..." are used in Section 1.4; and the words "met the relative priority checkpoints to at least level 1" are used in Section 1.5. What do
these words mean specifically? It appears that the meaning of "relative priority" mentioned in Sections 1.4 and 1.5 of the ATAG Working Draft referenced
above is not well-defined.
There is no explicit reference to success criteria in Section 1.3, 1.4, or 1.5 of the ATAG Working Draft mentioned above.
Do the words quoted above apply to any, all, or part of the success criteria?
It is not clear from reading the ATAG Working Draft referenced above. Proposed solutions might be to specify the meanings of the quoted words above,
to relate them to success criteria objectivity if that is what is meant, and to develop a testing mechanism for relative priority.
[REF 1] Webster's II New Riverside University Dictionary, 1984.