Your comments on WCAG 2.0 Last Call Draft of April 2006 (1 of 4)

Dear Al Gilman,

Thank you for your comments on the 2006 Last Call Working Draft of the
Web Content Accessibility Guidelines 2.0 (WCAG 2.0
http://www.w3.org/TR/2006/WD-WCAG20-20060427/). We appreciate the
interest that you have taken in these guidelines.

We apologize for the delay in getting back to you. We received many
constructive comments, and sometimes addressing one issue would cause
us to revise wording covered by an earlier issue. We therefore waited
until all comments had been addressed before responding to commenters.

This message contains the comments you submitted and the resolutions
to your comments. Each comment includes a link to the archived copy of
your original comment on
http://lists.w3.org/Archives/Public/public-comments-wcag20/, and may
also include links to the relevant changes in the updated WCAG 2.0
Public Working Draft at http://www.w3.org/TR/2007/WD-WCAG20-20070517/.

PLEASE REVIEW the decisions for the following comments and reply to
us by 7 June at public-comments-WCAG20@w3.org to say whether you are
satisfied with the decision taken. Note that this list is publicly
archived.

We also welcome your comments on the rest of the updated WCAG 2.0
Public Working Draft by 29 June 2007. We have revised the guidelines
and the accompanying documents substantially. A detailed summary of
issues, revisions, and rationales for changes is at
http://www.w3.org/WAI/GL/2007/05/change-summary.html. Please see
http://www.w3.org/WAI/ for more information about the current review.

Thank you,

Loretta Guarino Reid, WCAG WG Co-Chair
Gregg Vanderheiden, WCAG WG Co-Chair
Michael Cooper, WCAG WG Staff Contact

On behalf of the WCAG Working Group

----------------------------------------------------------
Comment 1:

Source: http://www.w3.org/mid/p06110403c0bf326d6713@[10.0.1.5]
(Issue ID: LC-954)

A sympathetic reading suggests what you want to mean, but this text
does not say it.

As rendered, text is represented by glyphs, not Unicode characters.
Unicode characters are integer index values into the catalog of
characters.

And the critical property is independence from any structure beyond
the stream arrangement of the characters, as demonstrated by the fact
that the character sequence is effective in conveying the intended
understanding.  The sequence of characters in the encoded form is not
sufficient.  This has been recognized and expressed by the "as
rendered" language.  But you need to make the cognitive effectiveness
aspect of the test more overt.

When natural language is written down, it is often encoded in
alphabetical writing systems or scripts.  Other forms of communication
such as mathematical notations and symbolic identifiers have re-used
the characters, or bits and pieces of these writing systems, which
have been atomized into recombinant elements by the invention of
movable type.

This issue clearly illustrates the superiority of stating separate
requirements on the as-rendered representation of the content and on
the as-communicated representation thereof.

Proposed Change:

Try:

"Text content is content which conveys its intended meaning when
rendered in a sequence of glyphs recognizable as representing the
characters from some writing system.

"Content where character sequences are used to form a symbolic code to
reconstruct media or action scripts, such as the Base64 encoding of a
GIF format image, or an ECMAScript imperative instruction set, is to
be considered non-text content.  Likewise, character sequences where
the glyphs must be presented in a particular two-dimensional
arrangement, such as ASCII art, are to be considered non-text content."

Add separate encoding requirement:

Text content is to be conveyed from the author's automation to the
user's automation [in accordance with | in a manner interoperable
with] the Character Model for the World Wide Web (CharMod).

Consider lifting language from the IMS accessibility metadata
documents.  IIRC that is where I got this concept.

----------------------------
Response from Working Group:
----------------------------

The working group is trying hard to use language that is as simple as
possible. We have revised the definitions of "text" and "non-text
content" as follows:

text

    sequence of characters that can be programmatically determined,
where the sequence is expressing something in human language

non-text content

    any content that is not a sequence of characters that can be
programmatically determined or where the sequence is not expressing
something in human language

Note: This includes ASCII Art (which is a pattern of characters) and
leetspeak (which is character substitution).
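The Base64 case raised in the comment can be shown concretely:
encoding binary data produces a sequence of ordinary printable
characters that nonetheless expresses nothing in human language, so
under the revised definition it is non-text content. A minimal Python
sketch (the byte values are illustrative placeholders, not a complete
GIF file):

```python
import base64

# A few bytes standing in for binary image data (illustrative only;
# "GIF89a" is the real GIF signature, but this is not a valid image).
image_bytes = b"GIF89a\x01\x00\x01\x00\x80\x00\x00"

# Base64 encoding yields a sequence of printable ASCII characters...
encoded = base64.b64encode(image_bytes).decode("ascii")
print(encoded)  # begins "R0lGODlh"

# ...but that sequence is a symbolic code for reconstructing media,
# not an expression in human language: non-text content under the
# revised definition, despite being made of characters.
assert encoded.isprintable()
```

The point of the sketch is only that "made of characters" and "text"
are not the same test; the revised definition turns on whether the
sequence expresses something in human language.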


----------------------------------------------------------
Comment 2:

Source: http://www.w3.org/mid/p06110403c0bf326d6713@[10.0.1.5]
(Issue ID: LC-955)

It is hard to see how the case handled by the second sentence under
the first bullet doesn't include the case stated in the second bullet.

The only difference is that in the second bullet the information is
asked for in a label, and in the first case in a "text equivalent."
In either case the actual requirement is that this minimal information
about the non-text content item is required to be available in a way
that the user knows it is about the non-text item.

The fact that information in a label need not be replicated in a
substitutable text object is true here and is a vest-pocket example of
equivalent facilitation.

Proposed Change:

merge two items under the pattern of "human-understandable text
explanation associated with the non-text object by a programmatically
determined (i.e. machine recognizable) association."

----------------------------
Response from Working Group:
----------------------------

Thank you for your comment. We have modified 1.1.1 as follows:

1.1.1 Non-text Content: Except for the situations listed below, a text
alternative that presents equivalent information is provided for all
non-text content.
* Controls-Input: If non-text content is a control or accepts user
input, then it has a name that describes its purpose. (See also
Guideline 4.1 Support compatibility with current and future user
agents, including assistive technologies)
* Media-Test-Sensory: If non-text content is multimedia, live
audio-only or live video-only content, a test or exercise that must be
presented in non-text format, or primarily intended to create a
specific sensory experience, then text alternatives at least identify
the non-text content with a descriptive text label. (For multimedia,
see also Guideline 1.2 Provide synchronized alternatives for
multimedia.)
* CAPTCHA: If the purpose of non-text content is to confirm that
content is being accessed by a person rather than a computer, then
text alternatives that identify and describe the purpose of the
non-text content are provided and alternative forms in different
modalities are provided to accommodate different disabilities.
* Decoration-Formatting-Invisible: If non-text content is pure
decoration, or used only for visual formatting, or if it is not
presented to users, then it is implemented such that it can be ignored
by assistive technology.


----------------------------------------------------------
Comment 3:

Source: http://www.w3.org/mid/p06110403c0bf326d6713@[10.0.1.5]
(Issue ID: LC-956)

There is too great a leap, here. One either has to both present the
same information and fulfil the same purpose or one only has to
provide a nominative description of the purpose in text.

There is an important middle ground.

Proposed Change:

Make it more like the priorities in WCAG1:

The strongest requirement is that the text alternative accomplish the
purpose of the non-text content it is an alternative for.  The
preferred way to do this is to present the same information, but that
lower-level agreement between the alternatives is of lower
criticality.

Even this is not the right requirement because there needs to be
recognition that the content can succeed by affording equivalent
facilitation i.e. a go-path that succeeds for this user at a higher
level of aggregation.

----------------------------
Response from Working Group:
----------------------------

Thank you for your comment. We have modified 1.1.1 as follows:

1.1.1 Non-text Content: Except for the situations listed below, a text
alternative that presents equivalent information is provided for all
non-text content.
    * Controls-Input: If non-text content is a control or accepts user
input, then it has a name that describes its purpose. (See also
Guideline 4.1 Support compatibility with current and future user
agents, including assistive technologies)
    * Media-Test-Sensory: If non-text content is multimedia, live
audio-only or live video-only content, a test or exercise that must be
presented in non-text format, or primarily intended to create a
specific sensory experience, then text alternatives at least identify
the non-text content with a descriptive text label. (For multimedia,
see also Guideline 1.2 Provide synchronized alternatives for
multimedia.)
    * CAPTCHA: If the purpose of non-text content is to confirm that
content is being accessed by a person rather than a computer, then
text alternatives that identify and describe the purpose of the
non-text content are provided and alternative forms in different
modalities are provided to accommodate different disabilities.
    * Decoration-Formatting-Invisible: If non-text content is pure
decoration, or used only for visual formatting, or if it is not
presented to users, then it is implemented such that it can be ignored
by assistive technology.


----------------------------------------------------------
Comment 4:

Source: http://www.w3.org/mid/p06110403c0bf326d6713@[10.0.1.5]
(Issue ID: LC-958)

CAPTCHAs, displays that match the "if" clause of the third bullet, can
claim conformance under the first bullet by simply saying in the ALT
text "image to test if you are a human."  This is not just, nor do I
believe it was the authors' intent, to open up this loophole.

Proposed Change:

Close loophole by repairing the first bullet along the lines of the
previous comment.

----------------------------
Response from Working Group:
----------------------------

Thank you for your comment. We have modified 1.1.1 as follows:

1.1.1 Non-text Content: Except for the situations listed below, a text
alternative that presents equivalent information is provided for all
non-text content.
* Controls-Input: If non-text content is a control or accepts user
input, then it has a name that describes its purpose. (See also
Guideline 4.1 Support compatibility with current and future user
agents, including assistive technologies)
* Media-Test-Sensory: If non-text content is multimedia, live
audio-only or live video-only content, a test or exercise that must be
presented in non-text format, or primarily intended to create a
specific sensory experience, then text alternatives at least identify
the non-text content with a descriptive text label. (For multimedia,
see also Guideline 1.2 Provide synchronized alternatives for
multimedia.)
* CAPTCHA: If the purpose of non-text content is to confirm that
content is being accessed by a person rather than a computer, then
text alternatives that identify and describe the purpose of the
non-text content are provided and alternative forms in different
modalities are provided to accommodate different disabilities.
* Decoration-Formatting-Invisible: If non-text content is pure
decoration, or used only for visual formatting, or if it is not
presented to users, then it is implemented such that it can be ignored
by assistive technology.


----------------------------------------------------------
Comment 5:

Source: http://www.w3.org/mid/p06110403c0bf326d6713@[10.0.1.5]
(Issue ID: LC-959)

The non-text content must be implemented such that it can be ignored
anyway, even if the text equivalent provides full equivalent
facilitation.  You can't have the video frame-change events capturing
the AT's attention, etc.  The requirement stated here applies all the
time, not only for pure decoration.

Proposed Change:

Break out into separate requirement on the "as communicated"
representation of the content, a.k.a. the "data on the wire."

----------------------------
Response from Working Group:
----------------------------

Although this is theoretically accurate, the assistive technology does
not have settings to ignore all content. The current wording seems to
best communicate the intent.

----------------------------------------------------------
Comment 6:

Source: http://www.w3.org/mid/p06110403c0bf326d6713@[10.0.1.5]
(Issue ID: LC-960)

The 'if' clauses don't belong in this construction.
I suspect what you had in mind was a cascade of elseIf clauses.

Proposed Change:

Change from OR (a.k.a. one of the following is true) of
* if A1 then B1
* if A2 then B2
* if A3 then B3
* etc.

to
OR of

* A1 and B1
* A2 and B2
* A3 and B3
* etc.
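The distinction being proposed here can be checked mechanically. In a
short Python sketch under hypothetical condition values, a disjunction
of if/then clauses (read as material implications) is vacuously
satisfied when no condition applies at all, while the proposed
disjunction of conjunctions is not:

```python
# Hypothetical (condition, requirement-met) pairs: no condition
# applies, and no requirement is met.
cases = [(False, False), (False, False), (False, False)]

# "One of the following is true" over if/then clauses:
# (not A) or B is the material-implication reading of "if A then B",
# which is true whenever A is false.
implication_form = any((not a) or b for a, b in cases)

# The proposed rewrite: OR over "A and B", true only when some
# condition actually holds and its requirement is met.
conjunction_form = any(a and b for a, b in cases)

print(implication_form)   # True: satisfied vacuously
print(conjunction_form)   # False: nothing actually qualifies
```

This is why the comment says the 'if' clauses don't belong in an
OR construction: the implication form lets content pass without any
bullet's requirement actually being met.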

----------------------------
Response from Working Group:
----------------------------

We have rewritten 1.1.1 in response to many comments, and we think it
reads more clearly now.

----------------------------------------------------------
Comment 7:

Source: http://www.w3.org/mid/p06110403c0bf326d6713@[10.0.1.5]
(Issue ID: LC-963)

This success criterion delivers less in the user experience than UAAG
1.0, checkpoint 3.1. UAAG makes this subject to user profiling.

Single-switch users, for example, rely on context changes that are
animated by the system, not triggered one by one by the user.

Low-vision users will come down on different sides of this question
depending on how much of the content they can see at once and how much
of the session structure they can hold in their head.

Proposed Change:

Best
Make equivalent facilitation (now 4.2) a principle. Include user
configuration of the user experience under one of the forms of
alternative recognized. State user experience requirements separately;
define these by reference to UAAG 1.0. State data-on-the-wire
requirements separately.  These have two options:

turnkey format -- player meets UAAG requirements directly.
open format -- format populates machinable content model (cf.
rewritten 4.1) with information and actions that let the UA manage and
provide this capability.

----------------------------
Response from Working Group:
----------------------------

Determining equivalent facilitation at this granularity so that it is
testable is beyond the scope of WCAG 2. User agents and assistive
technology may present alternative renderings of the content tailored
for the user, but the author should present a base set of behaviors in
which changes of context are initiated only by user request.

----------------------------------------------------------
Comment 8:

Source: http://www.w3.org/mid/p06110403c0bf326d6713@[10.0.1.5]
(Issue ID: LC-965)

This requirement is mis-filed in the current outline. This is a
control issue, a matter of keeping actions under the user's command.
If it were an orientation issue (principle 3) one could repair by
announcing context changes. That is not enough. In the current outline
it belongs with Principle 2.

Proposed Change:

re-flow this requirement under what the user can do, not what they can
understand.

----------------------------
Response from Working Group:
----------------------------

This success criterion is included because the unexpected change of
context is disorienting, not because it introduces barriers to
operation.

----------------------------------------------------------
Comment 9:

Source: http://www.w3.org/mid/p06110403c0bf326d6713@[10.0.1.5]
(Issue ID: LC-968)

The exception for moving to the next field in tab order is not in
agreement with the customary GUI behavior the user will expect.

Proposed Change:

The baseline should be customary GUI behavior, not next in TAB order.

----------------------------
Response from Working Group:
----------------------------

We have removed the exception from SC 3.2.2. Advancing to the next
field in tab order is only permitted if the user has been warned in
advance about this behavior.

----------------------------------------------------------
Comment 10:

Source: http://www.w3.org/mid/p06110403c0bf326d6713@[10.0.1.5]
(Issue ID: LC-969)

Requiring the explanation to be inline before the control is overly
restrictive. If and where the explanation gets rendered is
appropriately in the domain of the user's view and verbosity controls.
Also "authored unit contains" is too narrow. What matters is that the
server delivers the material to the browser.

Proposed Change:

Replace "authored unit contains instructions before the control that
describe the behavior" with "the delivered information about the
control includes a human-understandable explanation of the behavior
that is machine-understandable to be explanatory regarding this
control."

Align with requirements in 2.4.4 for links that there be
human-understandable stuff bearing required information that is
associated with the link or other object by an association that is
understandable by the user's automation.

----------------------------
Response from Working Group:
----------------------------

Because of the disorientation that can result from unexpected changes
of context, the working group wants the information to be presented
before the behavior is encountered. In this situation, just requiring
that the information be available in a way that the user agent can
provide it if the user asks for it seems too weak.

However, we have generalized the language of this success criterion to
allow a more flexible set of solutions to how to inform the user.

Received on Thursday, 17 May 2007 23:27:07 UTC