Organization and exposition comments on WCAG 2.0


Commenter: Al Gilman    

Email: Alfred.S.Gilman@IEEE.org

Affiliation: W3C Invited Expert

Date: see transmittal email


Directions

Please ensure that the comments submitted are as complete and "resolvable" as possible. Thank you.

1. Document Abbv. (W2/UW/TD)
2. Item Number (e.g. 1.1)
3. Part of Item (Heading)
4. Comment Type (G/T/E/Q)
5. Comment (including rationale for any proposed change)
6. Proposed Change (be specific)
W2 | wording of principle | G/T

"Content is perceivable" is an oxymoron; the content may be unrecognizable if the features of the rendered presentation are not perceivable, but perceivable features are not an independent principle. They are just a possible failure mode for the success, which is understanding what was being represented in the rendered content.

Understanding is an independent requirement; perception is an instrumentality. Compare with how people can frequently read text where all but the first and last letters of each word are obscured beyond recognition. Such rendered content would flunk a 'perceivable' test but pass a 'recognizable' test, thanks to the inherent redundancy of natural language.

The whitespace between glyphs is either perceivable or not perceivable. The functional requirement, if the glyphs are spelling some natural language, is that the user be able to recognize the natural language represented by the glyphs in the rendered content. The term 'recognizable' is a significantly better fit to the requirement here than 'perceivable'.
Proposed Change: Change to [words to the effect that]

"Information is recognizable in the rendered content at the user interface -- a) [best] as configured to be presented by the author and server, b) [good] as presented in a delivery context where the user employs readily-achievable adjustments to the look and feel, or c) [OK when that's all that works] as available in a readily discovered and followed alternate path through the content."
W2 | wording of principle | G/T

The principle is that "what one can do, others can do."

Interactive objects are an instrumentality, an intermediate level of representation. The content can succeed by affording equivalent facilitation for widgets through menus, other widgets, or voice command.
Proposed Change: There are two principles here:

1) What one can do, others (PWDs) can do. [This is more likely the section head for device-independent interaction etc.]

2) Afford remediation [restore a usable look-and-feel] with as little deformation of the look-and-feel as possible. [This is probably a separate, cross-cutting section explaining why color coding should be _visually evident_ where readily achieved, etc.]
W2 | 2.3 | does not fit principle | G

Seizures are actual harm to the user, independent of whether there is any value in the Web content there to be accessed. True, the seizures can prevent access to value in the content, but before one even gets there, there is harm done.

Proposed Change: Break out a separate "first, do no harm" principle and put 2.3 under it.

Introduce no impediments to people citing or implementing this provision as severable from any of the rest.
W2

Good, but there are a few things mis-filed.

Proposed Change: Move 3.2.5 to a "what the user can do" section containing most of the Principle 2 material.

Move Guideline 2.4 to a "what the user needs to be able to understand" section containing most of what is presently the Principle 3 material.
W2 | wording of principle

Compatibility with the future is a guaranteed "cannot complete a valid Candidate Recommendation" clause.

PF definitely sympathizes with your desire to have the content create a responsible document object model regardless of the AT uptake of the information at the moment. We want that too. But the way to achieve this is to have a concrete proposition as to the information that has to be available in machinable form; not to say "anything the user may infer from the rendered content must be encoded in a machine-recognizable notation." That isn't going to work. The way to success is, for example, to take a label. The requirement here is indicated by the dialog:

Q1: Can the user do something, here?
A1: Yes.
Q2: What can they do?
A2: <action>
Q3: What in the user experience tells them that?
A3: This <label>.

If the association of A3 as the label for the action is machine-recognizable, and the user can recognize A2 from A3, then we are done.
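
As a concrete sketch (the markup and names below are illustrative assumptions, not drawn from the guidelines), HTML's label/control association is one machine-recognizable encoding of this pattern:

<!-- A3: visible text programmatically bound to the control via for/id,
     so an assistive technology can answer Q3 by following the association. -->
<label for="search-query">Search the archive</label>
<input type="text" id="search-query" name="q">
<!-- A2: activating this control performs the action the label announces. -->
<input type="submit" value="Search">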

That is the general pattern to be replicated across the essential information to answer questions of

"Where am I?"

"What is _there_?"

"What can I do?"

.. at a fine enough grain so that the answers are always in a context that the user can recall or associate with their current place-in-browse (e.g. the focused object or reading cursor).
 
Proposed Change: Build on the contribution expected from IBM as to how to reword 4.1. We need to connect several things: the information required for an 'informed' user browse; machinable notations so the UA can map to an API the AT understands; and format specs that afford a shared understanding among author, author-automation, user-automation, and user.

Spell out information requirements in Principle 3 -- what the user needs to be able to understand (including about what they can do).

 
W2 | 4.2 | scope of guideline | G/T

Equivalent facilitation does not apply only to future technologies, or only to technologies outside your baseline. This is a principle that applies across the board.

Compare with the comments on the "one strike and you're out" roll-up in the comments on the conformance scheme.
Proposed Change: Re-write the document and conformance scheme so that it is obvious that the remedy for a potential problem may come at a different level of aggregation from where the potential problem was noticed.

For example, a CAPTCHA in the login process may be worked around by a totally different login process. In other words, the failure of access is local to the image, but the affordance of accommodation or relief is through equivalence at the task level, not the image level.

For another example, a diagram in SVG may afford the user a navigable structure of focusable elements. If these represent the notional objects in the scene, and they are suitably titled and otherwise annotated with properties and relationships, the text equivalent for the image may be provided at a lower level of aggregation, as text associated with the parts of the image.
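
A minimal sketch of what that per-part text might look like (the scene and names are hypothetical, and how elements are made focusable varies by SVG profile and host):

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 140 100">
  <title>Office floor plan</title>
  <!-- Each notional object carries its own title/desc, so the text
       equivalent is distributed across the parts of the image. -->
  <g id="meeting-room" tabindex="0">
    <title>Meeting room</title>
    <desc>Seats eight; the door opens onto the east corridor.</desc>
    <rect x="10" y="10" width="80" height="80"/>
  </g>
  <g id="east-corridor" tabindex="0">
    <title>East corridor</title>
    <rect x="100" y="10" width="30" height="80"/>
  </g>
</svg>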

So the relief may be at a higher or lower level of aggregation than the perception of a possible problem (recognition of a match to the selector or applicability condition -- what I would call the "left-hand side" of a rule, e.g. a success criterion).
W2 | entire | G/T

This document is trying to be universal in too many dimensions at once. What results is impenetrable access-babble.

*executive summary*

Break down the cases addressed by assertions (guidelines, success criteria, and testable assertions in test methods) using sub-categories both of content and of presentation.

- content: genre and task

genre: distinguish narratives, grids, mash-ups, etc. (see details for more).

task: navigating passive content, interacting with atomic controls, interacting with composite widgets, following multi-stage tasks, checkout from a business process, etc.

- presentation: specialization of the delivery context, describable in terms such as those in the W3C Delivery Context Overview and the IMS Global Learning Consortium Accessibility Specification user-preference terms.

*details*

Testable assertions should come in collections of test conditions that present the same content in multiple alternate look-and-feel adaptations. Check multiple assertions in each of these renderings. Don't try to make things both universal across presentations and testable at the same time where that is hard. Allow yourself the ability to say "Must A under condition B, and must C under condition D, etc."

This is particularly applicable to the suitability of required text explanations. It is possible, by controlling the exposure of the content to the tester through a prescribed dialog, to make all the necessary judgements determinable by a lay person, not just an accessibility-knowledgeable individual. We need to get there or go home.

The second axis of particularization that has been missed, and needs to be put to use, is the classification of content by task and by genre or structural texture. The structure classes are:

- bag (collection of random-category items)

- collection (collection of like-category items)

- list (collection of items with an intrinsic semantic order -- alphabetical does not count)

- narrative (stream of text etc. that tells a coherent tale)

- tree (collection of content with embedded smaller, tighter collections)

- grid (two-dimensional array of content fragments)

- graph (collection of articulable objects linked by significant relationships)

.. these are structure classes that apply to regions in the content, and they guide the applicability of information requirements. Each of these cases has its own proper short list of what needs to be in the "what the user needs to be able to understand" section -- conveyed through user-comprehensible media connected by machine-comprehensible associations. (A sketch of one such association follows.)
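
As an illustrative sketch (the content is hypothetical, and the mapping is my assumption, not taken from the guidelines), HTML already gives the collection/list distinction above a machine-comprehensible form:

<!-- collection: like-category items with no intrinsic order -->
<ul>
  <li>Paris</li>
  <li>Rome</li>
  <li>Madrid</li>
</ul>

<!-- list: intrinsic semantic order, so the ordered-list element is the
     machine-comprehensible association the user's tools can announce -->
<ol>
  <li>Preheat the oven</li>
  <li>Mix the batter</li>
  <li>Bake for 40 minutes</li>
</ol>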

Likewise if we break out tasks such as:

- managing navigation within the page

- managing navigation around the site

- interacting with an atomic control

- interacting with a composite control (HTML forms and Windows combo boxes and dialogs are examples)

- money-free On Line Transaction Processing -- getting something to happen at or through the server site.

- money-involving OLTP

- security-sensitive OLTP

.. then we will have a much better handle on what the requirements are for the content.