Comments on the UAAG (28 Jan 00 Version)

Date: 18 February 2000
To: User Agent Accessibility Guidelines List
From: Eric Hansen
Re: Comments on the User Agent Accessibility
Guidelines 1.0 (28 January 2000 Candidate
Recommendation)

The document seems to read quite well.

Following are a few comments. These comments
attempt to reconcile and harmonize this document
with the other documents (Web Content and Authoring
Tools).

====
Part 1: Issues Regarding Collated Text Transcripts
and Text Transcripts

Comment #1: Checkpoint 2.6 overlooks collated text
transcripts for movies and animations.

Importance: Essential

Collated text transcripts cannot be left out.
Also, refer to the more generic "presentations"
rather than "tracks," since standalone audio files,
which have no separate tracks, are also covered.
Note. I wonder if one should make explicit that the
text transcripts in checkpoint 2.6 refer to audio
presentations rather than to auditory tracks.
Perhaps it is best left unstated at this stage so
that it could refer to both, where both might be
supplied.

Old:

"2.6 Allow the user to specify that text
transcripts, captions, and auditory descriptions be
rendered at the same time as the associated
auditory and visual tracks. [Priority 1]
"Note. Respect synchronization cues during
rendering."

New:

"2.6 Allow the user to specify that text
transcripts, collated text transcripts, captions,
and auditory descriptions be rendered at the same
time as the associated auditory and visual
presentations. [Priority 1]
"Note. Respect synchronization cues during
rendering."

====

Comment #2: Lack of requirement for synchronized
presentation of collated text transcripts.

Importance: Important

Note. This issue is important to address now and to
decide one way or the other.

I note that the document does nothing to assert
that the user agent must allow users to present a
collated text transcript that is synchronized with
the auditory and visual tracks. Checkpoint 1.4 of
WCAG 1.0 suggests a requirement for such a
capability, since a collated text transcript is an
important and required equivalent alternative ("1.4
For any time-based multimedia presentation (e.g., a
movie or animation), synchronize equivalent
alternatives (e.g., captions or auditory
descriptions of the visual track) with the
presentation. [Priority 1] ").

But I am not sure whether or how WCAG has resolved
this issue.

However, unless the UA document explicitly requires
the option of synchronized presentation of the
collated text transcript, user agents will
generally not provide it.

If it were determined that WCAG checkpoint 1.4 also
includes collated text transcripts, then UA
checkpoint 2.6 might read something like the
following:

New -- Taking into account synchronized collated
text transcripts:

"2.6 Allow the user to specify that text
transcripts, collated text transcripts (both
unsynchronized and synchronized), captions, and
auditory descriptions be rendered at the same time
as the associated auditory and visual
presentations. [Priority 1]
"Note. Respect synchronization cues during
rendering."

If it were determined that Priority 1 is too high
for synchronized collated text transcripts, then
one could break it out into two checkpoints:

"2.6A Allow the user to specify that text
transcripts, collated text transcripts, captions,
and auditory descriptions be rendered at the same
time as the associated auditory and visual
presentations. [Priority 1]
"Note. Respect synchronization cues during
rendering."

"2.6B Allow the user to specify that collated text
transcripts be synchronized with the auditory and
visual tracks. [Priority 2 <or 3>]
"Note. Respect synchronization cues during
rendering."

As stated in earlier memos, the point of the
synchronized collated text transcript is to make
movies accessible to many people who have low
vision and/or are hard of hearing.

====

Comment #3: Checkpoint 4.8 overlooks text
transcripts for audio clips and collated text
transcripts for movies and animations.

Importance: Essential

Old:

"4.8 Allow the user to configure the position of
captions on graphical displays. [Priority 1]
Techniques for checkpoint 4.8"

New:

"4.8 Allow the user to configure the position of
captions, text transcripts, and collated text
transcripts on graphical displays. [Priority 1]
Techniques for checkpoint 4.8"

====

Comment #4: Fix the definition of text transcript

Importance: Essential

Old:

"Text transcript"
"A text transcript is a text equivalent of audio
information (e.g., an auditory track). It provides
text for both spoken words and non-spoken sounds
such as sound effects. Text transcripts make
presentations accessible to people who are deaf-
blind (they may be rendered as Braille) and to
people who cannot play movies, animations, etc.
Transcripts may be generated on the fly (e.g., by
speech-to-text converters). Refer also to
captions."

New:

"Text Transcript"
"A text transcript is a text equivalent of audio
information (e.g., an audio presentation or the
auditory track of a movie or animation). It
provides text for both spoken words and non-spoken
sounds such as sound effects. Text transcripts make
audio information accessible to people who have
hearing disabilities and to people who cannot play
the audio. Text transcripts are usually pre-written
but may be generated on the fly (e.g., by speech-
to-text converters). Refer also to _captions_ and
_collated text transcript_."

====

Comment #5: Add the definition of collated text
transcript.

Importance: Essential

Following is a short definition. It could be more
extensive if necessary.

"Collated Text Transcript"

"A collated text transcript is a text equivalent of
a movie or animation. More specifically, it is a
collation of the text transcript of the auditory
track and the text equivalent of the visual track.
For example, a collated text transcript typically
includes segments of spoken dialogue interspersed
with text descriptions of the key visual elements
of a presentation (actions, body language,
graphics, and scene changes). See also _text
transcript_ and _auditory description_."

====
Part 2: Other Comments

Comment #6: Clarify checkpoint 2.2.

Importance: Important

I think that the scope and meaning of this
checkpoint need to be better defined. What are
examples of "presentations that require user
interaction within a specified time interval"? It
seems that solutions such as "allowing the user to
pause and restart the presentation, to slow it
down" are relevant to a lot of time-based
presentations. Why does this "configuration"
capability only encompass "presentations that
require user interaction within a specified time
interval"?

"2.2 For presentations that require user
interaction within a specified time interval, allow
the user to configure the time interval (e.g., by
allowing the user to pause and restart the
presentation, to slow it down, etc.). [Priority 1]"
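
For what it is worth, the kind of content I assume
the checkpoint has in mind is script-driven
material that advances (or times out) unless the
user responds within some interval. The sketch
below is mine, not from the Techniques document;
the names userTimeoutMultiplier, showPrompt, and
advancePresentation are hypothetical. It only
illustrates one way a user-configurable interval
could work:

  // Hypothetical timed interaction: the presentation
  // advances automatically unless the user responds
  // within the interval assumed by the author.
  const AUTHOR_DEFAULT_MS = 5000;
  let userTimeoutMultiplier = 1; // user-configurable
  let timerId: number | undefined;

  function showPrompt(): void {
    // ...render the question or choice to the user...
    timerId = window.setTimeout(
      advancePresentation,
      AUTHOR_DEFAULT_MS * userTimeoutMultiplier);
  }

  function pausePrompt(): void {
    // Pausing (clearing the pending timeout), or letting
    // the user raise userTimeoutMultiplier, would be ways
    // of "configuring the time interval".
    if (timerId !== undefined) {
      window.clearTimeout(timerId);
      timerId = undefined;
    }
  }

  function advancePresentation(): void {
    // ...move on to the next segment of the presentation...
  }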

====

Comment #7: Fix the introduction for Guideline 2.

Importance: Important

The second paragraph has several problems. Here it
is:

"Access to content requires more than mode
redundancy. For dynamic presentations such as
synchronized multimedia presentations created with
SMIL 1.0 [SMIL], users with cognitive, hearing,
visual, and physical disabilities may not be able
to interact with a presentation within the time
delays assumed by the author. To make the
presentation accessible to these users, user agents
rendering synchronized presentations must either
provide access to content in a time-independent
manner or allow users to configure the playback
rate of the presentation."

Problems:

a. The first sentence implies that a list of
requirements will follow, but such a list does not
seem to follow.

b. The following phrase seems inaccurate.

"To make the presentation accessible to these
users, user agents rendering synchronized
presentations must either provide access to content
in a time-independent manner or allow users to
configure the playback rate of the presentation."

At least for audio clips, movies, and animations,
isn't it the case that they need to do _both_, not
just one or the other? Text transcripts and
collated text transcripts provide access to content
in a _time-independent manner_ (see checkpoint
2.1). Checkpoint 4.5 allows _slowing_ of audio,
movies, and animations, and checkpoint 4.6 allows
the user to start, stop, pause, advance, and rewind
audio, video, and animations.

c. Nothing in guideline 2 seems to refer to a
general capability of allowing the user to
"configure the playback rate". Checkpoint 2.2
refers only to the narrow case of "presentations that
require user interaction within a specified time
interval". By the way, if some more general
capability for control over playback is desired
(beyond checkpoints 4.5, 4.6, and 2.2), then
someone should ensure that those capabilities
exist.

Again, I suppose that the key to understanding
this intro paragraph is knowing what is intended by
checkpoint 2.2. Clarifying the meaning of
checkpoint 2.2 might help in rewriting this
paragraph.

====

Comment #8: Disambiguate checkpoints 4.5 and 4.9.

Importance: Low to Moderate

Checkpoints 4.5 and 4.9 seem to overlap. Isn't
synthesized speech also "audio"? This may be a
minor problem; it is just not as clear as I would
like it to be.

"Checkpoints for multimedia: "
"4.5 Allow the user to slow the presentation rate
of audio, video, and animations. [Priority 1]"
"Techniques for checkpoint 4.5"
….
"Checkpoints for synthesized speech: "
"4.9 Allow the user to configure synthesized speech
playback rate. [Priority 1]"

====

Comment #9: Fix checkpoint 5.1 regarding write
access to DOM.

Importance: Unknown

The idea of "write" access to content raises a red
flag, making a DOM novice like me wonder if such a
requirement belongs in the AU guidelines rather
than the UA guidelines. It also seems strange to
refer readers to two DOM documents. This suggests
that there is no easy and reliable way to address
this checkpoint.

"5.1 Provide programmatic read and write access to
content by conforming to W3C Document Object Model
(DOM) specifications and exporting interfaces
defined by those specifications. [Priority 1]"
"For example, refer to DOM Levels 1 and 2 ([DOM1],
[DOM2]). User agents should export these interfaces
using available operating system conventions."
"Techniques for checkpoint 5.1"

I note that there seems to be a lot of discussion
on this issue on the list. I don't think I have any
more light to throw on the subject.
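
For readers who, like me, are not DOM experts,
here is a rough sketch of what "read and write
access to content" through the DOM Level 1/2
interfaces can look like. This is my own
illustration, not taken from the Techniques
document; the alt-text repair shown is only an
example of a write operation:

  // Read access: walk the tree and inspect content.
  const images = document.getElementsByTagName("img");
  for (let i = 0; i < images.length; i++) {
    const img = images.item(i);
    if (img !== null && !img.hasAttribute("alt")) {
      // Write access: modify the tree, e.g., so an
      // assistive technology can announce something.
      img.setAttribute("alt", "[image lacking a text equivalent]");
    }
  }

  // Write access can also add new nodes.
  const note = document.createElement("p");
  note.appendChild(
    document.createTextNode("Inserted by an assistive tool."));
  document.body.appendChild(note);

Note that even this small example uses interfaces
from both DOM Level 1 (the tree operations) and DOM
Level 2 (hasAttribute), which may be part of why
the checkpoint refers to two documents.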

====

Comment #10: Fix checkpoint 5.2.

Importance: Unknown

Checkpoint 5.2 also refers to "write" access. For
some reason it doesn't seem as much of an issue
here. Here is the language.

"5.2 Provide programmatic read and write access to
user agent user interface controls using standard
APIs (e.g., platform-independent APIs such as the
W3C DOM, standard APIs for the operating system,
and conventions for programming languages, plug-
ins, virtual machine environments, etc.)
[Priority 1]
For example, ensure that assistive technologies
have access to information about the current input
configuration so that they can trigger
functionalities through keyboard events, mouse
events, etc."

====

Comment #11: Fix first paragraph in guideline 11
(Documentation).

Importance: Important

I think that the paragraph is incorrect. The
requirement is for a WCAG-compliant format. Mention
of CD-ROM, diskette, fax, and telephone is
unnecessary. It may also make someone think that
these are requirements or that they are
alternatives presented by W3C. It is fine to
encourage developers to provide documentation in a
variety of ways, but it should not be stated in a
way that makes it seem like a requirement.

Old:

"Documentation includes anything that explains how
to install, get help for, use, or configure the
product. Users must have access to installation
information, either in electronic form (CD-ROM,
diskette, over the Web), by fax, or by telephone."

New:

"Documentation includes anything that explains how
to install, get help for, use, or configure the
product. At least one version of the documentation
must conform to the Web Content Accessibility
Guidelines [WAI-WEBCONTENT]."

====

Comment #12: Fix first sentence of the definition
of Active Element.

Importance: Important

Fix caps in title. Combine second paragraph into
first unless it has a distinct unifying idea.
Clarify first sentence.

Old:

"Active element"

"Active elements have associated behaviors that may
be activated (or "triggered") either through user
interaction or through scripts. Which elements are
active depends on the document language and whether
the features are supported by the user agent. In
HTML documents, for example, active elements
include links, image maps, form controls, element
instances with a value for the "longdesc"
attribute, and element instances with scripts
(event handlers) explicitly associated with them
(e.g., through the various "on" attributes). "
"An active element's behavior may be triggered
through any number of mechanisms, including the
mouse, keyboard, an API, etc. The effect of
activation depends on the element. For instance,
when a link is activated, the user agent generally
retrieves the linked resource. When a form control
is activated, it may change state (e.g., check
boxes) or may take user input (e.g., a text field).
Activating an element with a script assigned for
that particular activation mechanism (e.g., mouse
down event, key press event, etc.) causes the
script to be executed. "
"Most systems use the content focus to navigate
active elements and identify which is to be
activated."

New:

"Active element"

"Active elements <CHANGE>are elements with
behaviors </CHANGE>that may be activated (or
"triggered") either through user interaction or
through scripts. Which elements are active depends
on the document language and whether the features
are supported by the user agent. In HTML documents,
for example, active elements include links, image
maps, form controls, element instances with a value
for the "longdesc" attribute, and element instances
with scripts (event handlers) explicitly associated
with them (e.g., through the various "on"
attributes). <ADD INTO FIRST PARAGRAPH>An active
element's behavior may be triggered through any
number of mechanisms, including the mouse,
keyboard, an API, etc. The effect of activation
depends on the element. For instance, when a link
is activated, the user agent generally retrieves
the linked resource. When a form control is
activated, it may change state (e.g., check boxes)
or may take user input (e.g., a text field).
Activating an element with a script assigned for
that particular activation mechanism (e.g., mouse
down event, key press event, etc.) causes the
script to be executed. "
"Most systems use the content focus to navigate
active elements and identify which is to be
activated."

====

Comment #13: Fix the definition of text equivalents
(in Equivalent Alternatives for Content) in the
glossary.

Importance: Essential

This definition contains an error. It cites
captions as a "non-text" equivalent, but captions
are actually a text equivalent. Delete the word
captions in the sentence listed below.

Old:

"Equivalent alternatives of content include text
equivalents (long and short, synchronized and
unsynchronized) and non-text equivalents (e.g.,
captions, auditory descriptions, a visual track
that shows sign language translation of a written
text, etc.).

New:

"Equivalent alternatives of content include text
equivalents (long and short, synchronized and
unsynchronized) and non-text equivalents (e.g.,
<WORD DELETED>auditory descriptions, a visual track
that shows sign language translation of a written
text, etc.).

====

Comment #14: Fix the definition of "Recognize".

Importance: Moderate

Old:

"Recognize"
"A user agent is said to recognize markup, content
types, or rendering effects when it can identify
(through built-in mechanisms, Document Type
Definitions (DTDs) style sheets, headers, etc) the
information. For instance, HTML 3.2 user agents may
not recognize the new elements or attributes of
HTML 4.0. Similarly, a user agent may recognize
blinking content specified by elements or
attributes, but may not recognize that an applet is
blinking. The Techniques Document [UA-TECHNIQUES]
lists some markup known to affect accessibility."

New:

"Recognize"
"A user agent is said to "recognize" markup,
content types, or rendering effects when it can
identify the information. Recognition may occur
through built-in mechanisms, Document Type
Definitions (DTDs), style sheets, headers, or
other means. An example of failure of recognition
is that
HTML 3.2 user agents may not recognize the new
elements or attributes of HTML 4.0. While a user
agent may recognize blinking content specified by
elements or attributes, it may not recognize
blinking in an applet. The Techniques Document [UA-
TECHNIQUES] lists some markup known to affect
accessibility."

====

Comment #15: Fix the description of user agents in
the Abstract.

Importance: Moderate

Clarify the fact that each user agent is analyzed
independently. The reference to "(including
communication with assistive technologies)" is
unnecessary and confusing.

Current phrasing incorrectly implies that giving
"full access" is part of the definition of user
agent. Do not say "full access" because a user
agent is _still_ a user agent even if it does _not_
provide full access.

Old:

"Abstract"
"The guidelines in this document explain to
developers how to design user agents that are
accessible to people with disabilities. User agents
include graphical desktop browsers, multimedia
players, text browsers, voice browsers, plug-ins,
and other assistive technologies that give full
access to Web content. While these guidelines
primarily address the accessibility of general-
purpose graphical user agents (including
communication with assistive technologies), the
principles presented apply to other types of user
agents as well. Following these principles will
make the Web accessible to users with disabilities
and will benefit all users."

New:

"Abstract"
"The guidelines in this document explain to
developers how to design user agents that are
accessible to people with disabilities. User agents
include graphical desktop browsers, multimedia
players, text browsers, voice browsers, plug-ins,
and other assistive technologies that <CHANGE>
provide access </CHANGE>to Web content. While these
guidelines primarily address the accessibility of
general-purpose graphical user agents <MATERIAL
DELETED>, the principles presented apply to other
types of user agents as well. Following these
principles will make the Web accessible to users
with disabilities and will benefit all users."

====

Comment #16: Make explicit the groupings of
checkpoints.

Importance: Important

The document does not explain the significance of
the groups of checkpoints within a guideline. The
meaning and significance should be explained in
section 1.3 (How the Guidelines are Organized).

Old:

"The eleven guidelines in this document state
general principles for the development of
accessible user agents. Each guideline includes: "
? " The guideline number."
? "The statement of the guideline."
? "The rationale behind the guideline and
identification of some groups of users who
benefit from it."
? "A list of checkpoint definitions."

New:

"The eleven guidelines in this document state
general principles for the development of
accessible user agents. Each guideline includes: "
? " The guideline number."
? "The statement of the guideline."
? "The rationale behind the guideline and
identification of some groups of users who
benefit from it."
? "A list of checkpoint definitions divided into
one or more checkpoint topics [or categories?]"

"The checkpoint topics, such as "Checkpoints for
content accessibility", "Checkpoints for user
interface accessibility", etc., allowing grouping
of related checkpoints. Within each topic the
checkpoints are ordered according to their
priority, e.g., [Priority 1] before [Priority 2]."

====

Comment #17: Glossary headings and first sentences
have some inconsistencies.

Importance: Moderate

Make glossary headings and first sentences
consistent. I suggest initial caps on all words in
each glossary entry name, and that the first
sentence be a complete sentence rather than just a
fragment.

====
<END OF MEMO>

