
Techniques for GL 2

From: Madeleine Rothberg <Madeleine_Rothberg@wgbh.org>
Date: 2 Dec 1999 11:12:39 -0500
Message-ID: <n1267989120.32064@wgbh.org>
To: "W3C-WAI-UA" <w3c-wai-ua@w3.org>

I previously submitted these techniques with all the wrong checkpoint
numbers. Here they are, revised to match the last call draft. Also, and
more exciting, the SYMM WG has in the meantime released a public
working draft of SMIL-Boston with significant enhancements for
access. I am including techniques that reference that document,
even though it is a draft, because the changes for access seem to
be stable. For example, a test attribute for audio description has been
added that parallels the one for captioning. I expect this will survive
to become part of the final recommendation.


2.1 Ensure that the user has access to all content, including alternative 
representations of content. [Priority 1] 
Techniques for checkpoint 2.1:
For SMIL, see specific techniques for Checkpoints 2.2, 2.4, and 2.5.

2.2 If a technology allows for more than one continuous equivalent tracks (e.g., 
closed captions, auditory descriptions, video of sign language, etc.), allow the user 
to choose from among the tracks. [Priority 1] 
Techniques for checkpoint 2.2:
Provide an interface which displays all available tracks, with as much identifying 
information as the author has provided, and allow users to choose which tracks 
are rendered. For example, if the author has provided "alt" or "title" for various 
tracks, use that information to construct the list of tracks.
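
For instance, a SMIL presentation might bundle a movie with a caption stream 
and a description track (a sketch; file names are illustrative):

```xml
<par>
  <video src="movie.mpg" alt="My Favorite Movie"/>
  <textstream src="captions.rt"
              alt="English captions for My Favorite Movie"/>
  <audio src="description.au"
         alt="English audio description for My Favorite Movie"/>
</par>
```

A player could list all three tracks by their "alt" values and let the user 
toggle each one.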

Provide an interface which allows users to indicate their preferred language 
separately for each kind of continuous equivalent. Users with disabilities may need 
to choose the language they are most familiar with in order to understand a 
presentation which may not include all equivalent tracks in all desired languages. 
In addition, international users may prefer to hear the program audio in its 
original language while reading captions in their first language, fulfilling the 
function of subtitles or improving foreign language comprehension. In 
classrooms, teachers may wish to control the language of various multimedia 
elements to achieve specific educational goals.
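
SMIL 1.0's system-language test attribute gives authors a way to offer the same 
track in several languages; a player exposing per-track language preferences 
could use it to select among caption streams, for example (a sketch; file names 
are illustrative):

```xml
<switch>
  <!-- chosen when the user's caption-language preference is French -->
  <textstream src="captions-fr.rt" system-language="fr"
              alt="French captions"/>
  <!-- chosen when the preference is English -->
  <textstream src="captions-en.rt" system-language="en"
              alt="English captions"/>
</switch>
```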

2.4 Provide time-independent access to time-dependent active elements or allow 
the user to control the timing of changes. [Priority 1] 
Techniques for checkpoint 2.4:
Provide access to a static list of time dependent links, including information about 
the context of the link. For example, provide the time at which the link appeared 
along with a way to easily jump to that portion of the presentation.

Provide easy-to-use controls (including both mouse and keyboard commands) to 
allow viewers to pause the presentation and advance and rewind by small and 
large time increments.  

Provide a mode in which all active elements are highlighted in some way and can 
be navigated sequentially. For example, use a status bar to indicate the presence of 
active elements and allow the user to navigate among them with the keyboard or 
mouse to identify each element, both when the presentation is moving and when 
it is paused.

2.5 Allow the user to specify that continuous equivalent tracks (e.g., closed 
captions, auditory descriptions, video of sign language, etc.) be rendered at the 
same time as audio and video tracks. [Priority 1] 
Techniques for checkpoint 2.5:
It is important that any continuous equivalent tracks be rendered synchronously 
with the primary content. This ensures that users with disabilities can use the 
primary and equivalent content in combination. For example, if a hard-of-hearing 
user is watching a video and reading captions, it is important for the captions to be 
in sync with the audio so that the viewer can use any residual hearing. For audio 
description, it is crucial that the primary audio track and the audio description 
track be kept in sync to avoid having them both play at once, which would reduce 
the clarity of the presentation.

User agents which play SMIL presentations should take advantage of a variety of 
access features defined in SMIL. A W3C note on access features of SMIL 1.0 
documents those features currently recommended [reference 
http://www.w3.org/TR/SMIL-access/]. A future version of SMIL (known
currently as SMIL Boston) is in development and additional access features may
be available when this specification becomes a W3C Recommendation. The 
following techniques reference features in SMIL 1.0 as well as features included
in the November 15, 1999 public working draft of SMIL Boston 
[reference http://www.w3.org/TR/smil-boston/].

As defined in SMIL 1.0, SMIL players should allow users to turn closed captions on 
and off by implementing the test attribute system-captions which takes the values 
"on" and "off." For example, include in the player preferences a way for users to 
indicate that they wish to view captions, when available. SMIL files with captions 
available should use the following syntax:
<textstream alt="English captions for My Favorite Movie"
            system-captions="on" src="captions.rt"/>
(The src value above is illustrative.) In this case, when the user has requested 
captions, this textstream should be rendered; when the user has not, it should 
not be rendered. (Note that the use of hyphens has been deprecated in SMIL 
Boston. This test attribute will be known in upcoming recommendations as 
systemCaptions.)

SMIL 1.0 does not provide a test attribute to control audio description in the same 
way as captions. However, a test attribute has been proposed for SMIL Boston to 
meet this need. User agents should consider implementing systemAudioDesc or 
be aware that when SMIL Boston becomes a W3C Recommendation they will be 
urged to implement it then. A test attribute to turn audio description on and off 
should be implemented in parallel to the implementation of the systemCaptions 
attribute. Users should be able to indicate the preference to receive audio 
description, when content authors make it available, through the standard 
preferences-setting section of the UA's user interface. 
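
Implemented in parallel with system-captions, a description track might be 
marked up as follows (a sketch based on the SMIL Boston working draft, whose 
details may change before the final Recommendation; the file name is 
illustrative):

```xml
<!-- rendered only when the user has requested audio description -->
<audio src="movie-description.au" systemAudioDesc="on"
       alt="Audio description for My Favorite Movie"/>
```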

Another test attribute, systemOverdubOrSubtitle (system-overdub-or-caption in 
SMIL 1.0), allows the user to choose between alternate-language text or sound. This 
attribute specifies whether subtitles or overdub should be rendered for people who 
are watching a presentation where the audio may be in a language in which they 
are not fluent. This attribute can have two values: "overdub", which selects for 
substitution of one voice track for another, and "subtitle", which means that the 
user prefers the display of subtitles. However, this attribute should not be used to 
determine whether users need captions. When both are available, deaf users will 
prefer to view captions, which contain additional information on music, sound 
effects, and who is speaking that is not included in subtitles, since subtitles are 
intended for hearing audiences.
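
A sketch of how an author might use this attribute in a SMIL Boston presentation 
(working-draft syntax; file names are illustrative), with the switch selecting 
whichever child matches the user's stated preference:

```xml
<switch>
  <!-- rendered when the user prefers dubbed audio -->
  <audio src="audio-fr.au"
         systemOverdubOrSubtitle="overdub"
         alt="French overdub of the original audio"/>
  <!-- rendered when the user prefers subtitles; the original
       audio track would play elsewhere in the presentation -->
  <textstream src="subtitles-fr.rt"
              systemOverdubOrSubtitle="subtitle"
              alt="French subtitles"/>
</switch>
```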

User agents which play QuickTime movies should provide the user with a way to 
turn on and off the different tracks embedded in the movie. Authors may use 
these alternate tracks to provide synchronized equivalents for use by viewers with 
disabilities. The Apple QuickTime player currently provides this feature through 
the menu item "Enable Tracks."

Microsoft Windows Media Object
User agents which play Microsoft Windows Media Object presentations should 
provide support for Synchronized Accessible Media Interchange (SAMI), a protocol 
for creating and displaying caption text synchronized with a multimedia 
presentation. Users should be given a way to indicate their preference for viewing 
captions. In addition, user agents which play Microsoft Windows Media Object 
presentations should enable viewers to turn on and off other alternative 
equivalents, including audio description and alternate video tracks.
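
A minimal SAMI document, sketched from the format's basic structure (the class 
name, timings, and caption text are illustrative), pairs timed SYNC blocks with 
styled caption paragraphs:

```html
<SAMI>
<HEAD>
  <TITLE>My Favorite Movie</TITLE>
  <STYLE TYPE="text/css"><!--
    P { font-family: sans-serif; }
    .ENUSCC { Name: "English Captions"; lang: en-US; SAMIType: CC; }
  --></STYLE>
</HEAD>
<BODY>
  <SYNC Start=1000>
    <P Class=ENUSCC>[piano music]</P>
  </SYNC>
  <SYNC Start=4000>
    <P Class=ENUSCC>Narrator: Welcome.</P>
  </SYNC>
</BODY>
</SAMI>
```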

Other Formats
Other video or animation formats should incorporate similar features. At a 
minimum, users who are blind and users who are deaf need to be able to turn on 
and off audio description and captions. 

General Comments
The interface to set these preferences must be accessible. Information on how to
author accessible tracks must be included in documentation about how to author
for the media player.

2.6 If a technology allows for more than one audio track, allow the user to choose 
from among tracks. [Priority 1] 
Techniques for checkpoint 2.6:
See techniques for checkpoint 2.2.

Madeleine Rothberg
The CPB/WGBH National Center for Accessible Media
Received on Thursday, 2 December 1999 11:09:16 UTC
