Your comments on WCAG 2.0 Last Call Draft of April 2006 (4 of 4)

Comment 40:

(Issue ID: LC-1209)

This document is trying to be universal in too many dimensions at
once.  What results is impenetrable access-babble.

Proposed Change:

*executive summary*

Break down the cases addressed by assertions (guidelines, success
criteria, and testable assertions in test methods) using
sub-categories both of content and of presentation.

- content: genre and task

genre: distinguish narratives, grids, mash-ups, etc. (see details for more).

task: navigating passive content, interacting with atomic controls,
interacting with composite widgets, following multi-stage tasks,
checkout from a business process, etc.

- presentation: specialization of the delivery context, describable
using terms such as those in the W3C Delivery Context Overview and the IMS
Global Learning Consortium Accessibility Specification terms for user


Testable assertions should come in collections of test conditions that
have the same content presented in multiple alternate look-and-feel
adaptations.  Check multiple assertions in each of these renderings.
Don't try to make things universal across presentation and testable at
the same time where that is hard.  Allow yourself the ability to say
"Must A under condition B and must C under condition D, etc."

This is particularly applicable to the suitability of required text
explanations.  It is possible, by controlling the exposure of the
content to the tester through a prescribed dialog, to make all
necessary judgements determinable by a lay person, not an
accessibility-knowledgeable individual.  We need to get there or go

The second axis of particularization that has been missed and needs to
be put to use is the classification of content by task and genre or
structural texture.  The structure classes:

- bag (collection of random-category items)

- collection (collection of like-category items)

- list (collection of items with an intrinsic semantic order --
alphabetical does not count)

- narrative (stream of text etc. that tells a coherent tale)

- tree (collection of content with embedded smaller, tighter collections)

- grid (two-dimensional array of content fragments)

- graph (collection of articulable objects linked by significant relationships)

.. these are structure classes that apply to regions in the content,
and guide the applicability of information requirements -- each of
these cases has its own proper short list of what needs to be in the
"what user needs to be able to understand -- through user
comprehensible media connected by machine-comprehensible

Likewise if we break out tasks such as:

- managing navigation within the page

- managing navigation around the site

- interacting with an atomic control

- interacting with a composite control (HTML forms and Windows Combo
Boxes and Dialogs are examples of the latter).

- money-free On Line Transaction Processing -- getting something to
happen at or through the server site.

- money-involving OLTP

- security-sensitive OLTP

We will have a much better handle on what the requirements are for the content.
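The two axes above (structural texture and task) could drive a simple
lookup of per-case requirements.  A minimal sketch in Python: the enum
members come straight from the lists above, but the REQUIREMENTS
entries and function names are illustrative assumptions, not anything
drawn from WCAG itself:

```python
from enum import Enum

# The structure classes and tasks enumerated above.
class Structure(Enum):
    BAG = "bag"
    COLLECTION = "collection"
    LIST = "list"
    NARRATIVE = "narrative"
    TREE = "tree"
    GRID = "grid"
    GRAPH = "graph"

class Task(Enum):
    PAGE_NAVIGATION = "navigate within the page"
    SITE_NAVIGATION = "navigate around the site"
    ATOMIC_CONTROL = "interact with an atomic control"
    COMPOSITE_CONTROL = "interact with a composite control"
    OLTP = "money-free OLTP"
    OLTP_MONEY = "money-involving OLTP"
    OLTP_SECURE = "security-sensitive OLTP"

# Each (structure, task) pair gets its own short list of information
# requirements; the two entries here are invented examples only.
REQUIREMENTS = {
    (Structure.GRID, Task.PAGE_NAVIGATION):
        ["row/column headers exposed", "reading order defined"],
    (Structure.NARRATIVE, Task.PAGE_NAVIGATION):
        ["section headings marked up", "language of text identified"],
}

def requirements_for(structure, task):
    """Return the applicable requirement list, empty if none catalogued."""
    return REQUIREMENTS.get((structure, task), [])
```

The point of the table shape is exactly the comment's point: the short
list of what must be articulable differs per cell, rather than one
universal list for all content.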

Response from Working Group:

Thank you for your comment.  Much of what you propose here would
require a complete restructuring of the document or changing it into a
note.  Without a concrete restructuring proposal that shows how this
would happen without creating other problems, we are not able to
evaluate this clearly.  If there are specific changes that you could
suggest particularly around your idea of providing additional
information on scope or application of success criteria and techniques
we would be very happy to consider them as we move forward on evolving
the understanding and techniques documents.  We will also keep these
comments in mind as we are working on these documents and trying to
provide such information as we identify it ourselves.

Comment 41:

(Issue ID: LC-1210)

Date-qualified claims are introduced in the discussion of migration
from WCAG1 to WCAG2.  They are also of interest to sites, and
appropriate for use in staged adoption plans more generally.

Proposed Change:

Introduce and define terms for date-qualified claims earlier in the
general discussion of a claim.

Don't wait until talking about WCAG1.

Response from Working Group:

Date-qualified claims are an interesting policy for a policy-making
organization to consider, and if we write supplementary material for
policy makers, we would include it there. But since it is a policy
issue, we have removed it from the guidelines themselves.

Comment 42:

(Issue ID: LC-1211)

What is this language trying to say?  Do you mean MUST NOT?
Editorial: CANNOT would be appropriate for an observation as to what
is feasible in the medium.  This is a remark as to what the author
believes is appropriate in this genre.

It is also neither an accessibility issue nor is it always a good idea.

Compare this with the language in Success Criterion 1.1.1 where there
are specific provisions for a case where the whole purpose of the site
is some sort of sensory or perceptual effect or test.  In that case
the content is allowed to be explained but not accorded universally
accessible equivalent facilitation.  In the case of such content, such
as the tester for Daltonism, it would be both appropriate and polite
to mention that this web presence is intended for those with
substantial visual acuity to evaluate their color perception.

Proposed Change:

Strike the remark.

Response from Working Group:

The remark has been removed.

Comment 43:

(Issue ID: LC-1212)

This is not adequate reproducibility to be the basis of
W3C-recommended interoperability.  If the test points don't repeatably
produce the desired outcome with lay testers, the statement of the
criteria is too arcane for the Web with its millions of authors.

Proposed Change:

(1) Define,
(2) take through the W3C Recommendation process, and
(3) make conformance rely on: more-concrete testable assertions.

Response from Working Group:

We have removed the reference to "people who understand WCAG". We
agree that it should require no understanding that is not available
from the guidelines and supporting documentation to determine whether
the guidelines have been satisfied.

Comment 44:

(Issue ID: LC-1213)

The wide-open nature of the baseline means that the obvious
interpretation of the W3C Candidate Recommendation phase could never
be completed because there would be other baseline profiles that
remained un-demonstrated.

Proposed Change:

Spell out an explicit experiment plan for Candidate Recommendation.
Define the baselines to be used to demonstrate the effectiveness of
these guidelines.

Make PR [a.k.a. CR exit] contingent on demonstrating the joint
statistical distribution of the proposed testable hypotheses and user
success in using live contemporary web content.

Response from Working Group:

We will be working with the W3C to define exit criteria for the
Candidate Recommendation phase that are appropriate for the WCAG 2.0
guidelines. Formal experimental design of this type is beyond the
scope of the Working Group's charter.

Comment 45:

(Issue ID: LC-1214)

What constitutes a process in terms of WCAG conformance is unenforceably
vague, and at least in terms of the first example given, unfairly narrow.

Shopping generally progresses through browse, select, and checkout
phases.  Only the checkout is a rigidly serialized process.  And on
some sites you can get live assistance by which you could place your
order by chat.  So using a whole shopping site as an example of a
'process' which is subject to an "all or none" accessibility rule is
unduly severe.

Proposed Change:

Include an accounting for equivalent facilitation separate from the
individual testable hypotheses and integrated into the rollup of
conformance assessment.  (see next)

You might want to remark that it's not cool for a shopping site to
claim conformance for a subset of the site that doesn't let people
complete a purchase.  But don't try to fold that policy value
judgement into a W3C technical report.  Let the latter be technical.

Response from Working Group:

We have included two provisions in the rewritten conformance section
to deal with these issues.

4.) Alternate Versions: If the Web page does not meet all of the
success criteria for a specified level, then a mechanism to obtain an
alternate version that meets all of the success criteria can be
derived from the nonconforming content or its URI, and that mechanism
meets all success criteria for the specified level of conformance. The
alternate version does not need to be matched page for page with the
original (e.g. the alternative to a page may consist of multiple
pages). If multiple language versions are available, then conformant
versions are required for each language offered.

9.) Complete processes: If a Web page that is part of a process does
not conform at some level, then no conformance claim is made at that
level for any Web pages in that process.

Example: An online store has a series of pages that are used to select
and purchase products. All pages in the series from start to finish
(checkout) must conform in order to claim conformance for any page
that is part of the sequence.

We have also added the following definition for "process."


    series of user actions where each action is required in order to
complete an activity

    Example 1: A series of Web pages on a shopping site requires users
to view alternative products and prices, select products, submit an
order, provide shipping information and provide payment information.

    Example 2: An account registration page requires successful
completion of a Turing test before the registration form can be

Comment 46:

(Issue ID: LC-1215)

Some wiggle room is attempted in the statement of the individual
success criteria, but the general process for arriving at satisfaction
of a conformance claim is still, as in WCAG, a "one strike and
you're out" rule.

Another way to describe it is that there is an "AND" combination of
single point test results to get the overall score.

This is a serious problem.

This kind of rollup or score-keeping is seriously out of alignment
with the general redundant quality of natural communication including
web content.

In natural communication there is often more than one way to learn
what there is to learn from an utterance.

And in GUIs there is often more than one way to effect any given outcome.

So long as there is a go-path and the user can find it, a noGo-path
should not force a failing grade for the [subject of conformance

There is prior art in Mean Time Between Failures computations in
reliability engineering, in the handling of redundant fallback

Proposed Change:

An approach to consider:

Make the assessment of overall score or rating incorporate the
recognition of alternatives at all levels of aggregation and not just
at the leaf level.  In other words, take systematic account of
equivalent facilitation.

If there is an accessible way to learn or do what there is to learn or
do, and an accessible way to find this when other paths prove
problematic, that content should not fail as a result of the problems
with the alternate path.  It is not enough to address this in 1.1.1
and 4.2.  It needs to be global.
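The proposed rollup can be pictured as a small recursive check:
requirements AND together as they do today, but an equivalent
alternative ORs in at every level of aggregation, not just at the
leaf.  A hedged sketch (the node shape and all names here are
hypothetical, not WCAG's actual conformance algebra):

```python
# Sketch of the proposed rollup: a node passes if its own test passes
# and all its children pass, OR if any equivalent alternative passes.
# Node shape and field names are illustrative only.

def passes(node):
    """node = {'ok': bool, 'children': [...], 'alternatives': [...]}"""
    own = node.get("ok", True) and all(passes(c) for c in node.get("children", []))
    if own:
        return True
    # Equivalent facilitation: any passing alternative rescues the node.
    return any(passes(alt) for alt in node.get("alternatives", []))

# A failing noGo-path does not fail the page when a go-path exists:
page = {
    "ok": True,
    "children": [
        {"ok": False,                      # broken path...
         "alternatives": [{"ok": True}]},  # ...with an accessible equivalent
        {"ok": True},
    ],
}
```

Under a pure AND rollup the broken child would sink the whole page;
with alternatives recognized at each level of aggregation, it does not.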

Response from Working Group:

While we think there is promise in using different strategies than the
Web page for providing alternative content, we were unable to develop
an approach that permitted this flexibility without introducing other
problems and loopholes. We recommend that this be investigated for
future versions of the guidelines.

Comment 47:

(Issue ID: LC-1216)

This clause is overly restrictive.  The authors appear to be fixated
on people writing public policy for the public web.  There are other
use cases which support cherry-picking requirements with full

One of these use cases is where a web development organization is
subcontracting for media fragments -- icons and background images or
the like -- from a subcontractor and writes a standard for acceptable
data packages which requires metadata to go with each including
provenance, sample ALT text, etc.  In this case there is no reason
that the customer organization should have to require all of Level A
on the piece parts purchased from the subcontractor -- the purchaser
will take care of the other aspects before putting the assembled web
content out on the web.  This requirement as now stated would make
that a violation of the Recommendation.

Proposed Change:

Soften the language from an imperative to a Recommendation.

Don't say "you mustn't cite arbitrary subsets," but rather say "if you
cite only a subset of the Level 1 Success Criteria, don't represent
this as WCAG 2.0 Conformance."

-- alternate wording...

W3C does not regard satisfying a profile of success criteria that does
not contain all the Level One success criteria as meriting the term
"conformance to WCAG 2.0".

Unless a specification or policy requires at least conformance to all
Level 1 Success Criteria, do not represent that policy or
specification as implementing WCAG 2.0 conformance.

Response from Working Group:

We have made conformance claims less regulatory and more descriptive,
that is, a conformance claim describes what is conformant to the
guidelines. We think it is more appropriate for policy makers to
determine appropriate exceptions.

We have taken your suggestion and addressed the issue by making it
explicit that Level A is the minimum level for which WCAG 2
conformance can be claimed:

For level A conformance (the minimum level of conformance), the Web
page satisfies all the Level A success criteria, or the page satisfies
conformance requirement 4.

Comment 48:

(Issue ID: LC-1217)

One can reasonably interpret what the Web Characterization Terminology
meant by "simultaneously" to mean "concurrently."  The point is that
your concept is their concept, you are just straining at gnats over
the term 'simultaneously' as if it implies 'instantaneously.'

You just don't know how much street cred you lose by using funny-money
terms like "web unit" when what you mean is what the web designer
means by a "web page."

Proposed Change:

Use "web page."

State that the concept is essentially the same as in the Web
Characterization Terminology.

Add something on the order of "Owing to the increasingly dynamic
nature of web pages today, one would be more likely to say 'rendered
concurrently' rather than 'rendered simultaneously' so people don't
think that there has to be an instant rendering of a static page.  The
requirement is that fluctuations in the page view take place in a
context which is stable enough so that the user's perception is that
they are in the same place."

Response from Working Group:

We have revised the guidelines and eliminated the word "Web unit" in
favor of "Web page." We have defined "Web page" as follows (see ):

Web page

    a resource that is referenced by a URI and is not embedded in
another resource, plus any other resources that are used in the
rendering or intended to be rendered together with it

    Note: Although any "other resources" would be rendered together
with the primary resource, they would not necessarily be rendered
simultaneously with each other.

    Example 1: When you enter in your
browser you enter a movie-like interactive shopping environment where
you visually move about a store dragging products off of the shelves
around you into a visual shopping cart in front of you. Clicking on a
product causes it to be demonstrated with a specification sheet
floating alongside.

    Example 2: A Web resource including all embedded images and media.

    Example 3: A Web mail program built using Asynchronous JavaScript
and XML (AJAX). The program lives entirely at,
but includes an inbox, a contacts area and a calendar. Links or
buttons are provided that cause the inbox, contacts, or calendar
to display, but do not change the URL of the page as a whole.

    Example 4: A customizable portal site, where users can choose
content to display from a set of different content modules.

Comment 49:

(Issue ID: LC-1218)

The concepts of 'content' and 'presentation' as used here are
inadequate to explain the needs of users with disabilities even with
today's technology.

Content v. Presentation

Since this is one of the holy cows of accessibility, and I at least
feel that this explanation is fundamentally problematical, I think
this is a fertile area for discussion.

Hypothetical dialog:


"rendering" refers to the rendered content, like an "artist's
rendering" of a planned building. This includes everything, the glyphs
for making writing evident (perceivable or recognizable) and all the
CSS-added effects such as font face, size, bold, and color that the
authors mean to imply by 'presentation.'


No, the use of 'rendering' here doesn't mean the as-rendered result,
it refers to the process of generating that result.

What's wrong with this story?

Preliminaries in terms of terminology
Both "presentation" and "rendering" are used in plain English to refer
to either the activity that transforms the content or the resulting
representation of the content at the User Interface. This ambiguity
could be resolved by more surgical word-smithing.

more serious

The transformation is entirely determined by the technology choices of
the author. So if we say 'presentation' is defined by the function of
this transformation, we are at the whim of the encoding used between
server and User Agent, and we haven't really isolated a genuine
rhetorical or semantic distinction.

If we go with "the process that generates the final representation at
the User Interface" we find that the division between content and
presentation is determined by the author's choice of file format and
the implied client-side transformation to the pixel plane or audio out.

To make this clear, consider that if we define the difference between
presentation and content by the rendering transform, then
text-in-images is image content.  At least the way PF has been
approaching things, this is an erroneous model. Because the text in
the image will be _recognized_ by the general user as being a sample
of written language, or some sort of code using the writing system of
a written language, that language or symbology defines a re-purposable
baseline for adapting the look and feel of the user experience to get
within the capture regime of the user's sensation, perception, and

* most serious:

The distinction is not only not defined by this language in the
document, it is not definable in any stable and fair way.

Starting at the sensible user interface, we can articulate different
deconstructions, or "content hypotheses" for what is presented to the
user.  In the case of textual content, that's not a big problem: the
Unicode character set provides a good level of abstraction that
a) supports client-side personalization of the rendered form of the
writing, and b) is readily verifiable by the author as "yes, that's
what I said."

The problem is that a lot of the connotations that are communicated by
arrangement and other articulable features in the scene presented to
the user are much less readily articulated or validated by the author.
And depending on the media-critical skill and habitual vocabulary of
the knowledge engineer doing the deconstruction, you will get rather
different information graphs as an articulation of the 'content' of
the scene.

Contemporary web page design is more poster design than essay
composition. And it is more interactive than what was supported in
HTML1 or HTML2. It is the design of a richly decorated and annotated
button-board -- a collection of clickables which the site and author
think would appeal to the visitor based on what they know from the
dialog so far. But that doesn't guarantee a lot of relationship
between the different fragments mashed together into the page. If the
page does focus on one coherent story, the page will list higher in
search results. But that hasn't taken over practice in the field yet.

So the information that is implicit in the [complete, including
glyphs, etc.] rendered form of the content is a mix of that which is
readily articulable, and the author will readily recognize as
articulable, and other information that at first seems ineffable until
a skilled media critic analyzes it and presents an analysis to the
author or designer; only then can they recognize that there are
articulable properties about their stuff. If the analyst is working
from an information model of properties and relationships that are
known to survive re-purposing and reinforce usability in the
re-purposed rendering, then this process of backing up appearances
(including sonic appearances if included) with an articulable model
will in fact enable better access, better usability in the adapted
experience. And it will also improve usability on the whole in the
un-adapted user experience, but more weakly. There will be fewer task
failures for the nominal user if the model underpinnings are not
provided than for the user who has to depend on an adapted user
experience, an off-nominal look and feel.

** so where do we go? how to do it right?

My latest attempt at a summary is the presentation I did at the
Plenary Day on 1 March.

afford functional and usable adapted views
         function: orientation --
           Where am I?
           What is there?
           What can I do?
         function: actuation --
           from keyboard and from API
           navigation: move "Where am I?"
         performance: usable --
           low task failure rate confirms access to action, orientation
           reasonable task-completion time confirms structure,
             orientation, navigation

What gets into our 'content' model, where we ask authors to encode and
share the answers to selected questions, is a modeling decision driven
by what we *know* about the functional differences between PWD and
nominal use of the User Interface.

In other words, we know that the "current context" -- the user's gut
understanding of the extent of the answer to "where am I?" -- that you
can reliably keep in foreground memory on the fly in a Text-To-Speech
readout of a page is a smaller neighborhood than what the visual user
perceives as the context that they are operating in. This is why there
has to be information that supports the system prompting the user
about where they are and where they can go -- navigation assistance --
*inside* the page, more consistently and at a finer grain than the
full-vision user needs.

The screen reader user without Braille has more local neighborhoods
that have to be explainable in terms of "where am I?" and hence more
levels of "where am I?" answers in the decomposition of the page. Most
of those levels or groupings are evident in the visual presentation,
but we have to coach the author in articulating, labeling, and
encoding the industrial-strength explanation of page structure beyond
what is enforced by better/worse differences in user experience under
nominal use conditions.

This effect is readily understood in the area of orienting for "what
can I do?" This requires good labeling for hyperlinks, form controls,
and other presented objects that invite user action. This is pretty
well set out in both WCAG1 and WCAG2.

The model of what information to ask for as regards intra-page
structure is less well resolved in the community (there is less
general agreement on what this information is). We have hopes for the
"time to reach" metric analysis used in ADesigner (from IBM Japan) as
a way to communicate when and where they need to improve the markup of
intrapage structure.

The bottom line is that we should stop trying to generate
disability-blind specifics at a level as low as the Success Criteria
without getting more specific about known disability-specific
adaptations in the look and feel of the web browse experience. Analyze
the usability drivers in the adapted look-and-feel situations, and
then harmonize the content model across multiple adaptations of look
and feel including the null adaptation.

[one principle I missed in the Tech Plenary presentation:]
-- the first principle, that I did brief, is to:

- enable enough personalization so that the user can achieve a
functional user experience (function and performance as described at

-- the second principle that we have talked about with DI but did not
discuss in the brief pitch at Plenary is to:

- enable the user to achieve this with as little perturbation in the
look and feel as possible; in other words, the equivalence between the
adapted and unadapted user experiences should be recognizable at as
low a level as possible. Using adaptations (equivalent facilitation)
which require a high level of abstraction (task level) to recognize
their equivalence is both necessary and OK if there is no less
invasive way to afford a functional user experience. But most of the
time there's an easier way, and while what is hard should be possible,
what is easy should indeed be easy.

Proposed Change:

Apply the suggestions under "two interfaces" comments, to wit:
Articulate requirements a) against the rendered content as it
contributes directly to the user experience b) against the content as
communicated between the server and the user agent -- the formal model
created by the format and exposed by the document object that results
from the "clean parse" (see IBM suggestions there).

Enumerate information requirements that have to be satisfied by the
formal model in terms of questions that the content object must be
able to answer in response to a predictable query (programmatically
determined requirement).  Most of these are context-adapted versions
of "Where am I?  What is _there_? and What can I do?"

Response from Working Group:

As with issue LC-1204, we believe that reorganizing the guidelines in
this fashion would be much more difficult for people to understand and
use than our current structure.

Comment 50:

(Issue ID: LC-1219)

Criterion is the singular of criteria; criteria is a plural noun.

Proposed Change:

Use 'criterion' where the singular is meant.

Response from Working Group:

Thanks. We have updated the draft accordingly.

Received on Thursday, 17 May 2007 23:27:49 UTC