SMIL: draft items for UA guidelines

WAI User Agent Guidelines Working Group & SMIL subgroup:

BACKGROUND: 
The following items concerning UA Guidelines and SMIL are from a discussion
between Philipp Hoschka (SMIL Editor & Chair) and myself. We'd like to ask
other people, including the UA SMIL sub-group and others who were
interested in ensuring that the UA guidelines include guidance on SMIL
implementation, to review and comment on these. I won't be able to join the
UA WG call this week or next at the changed time to discuss this (usually
the new time will be fine).

The format is along the lines of what I was proposing at last week's UA WG
conference call, to present first an abstract guideline, then a rationale
or explanation, then a general technique. Any additional implementation
details should be in the detailed technical appendix for the guidelines.

Among other issues, the suggested priority levels should be reviewed since
I know that the UA Working Group is trying to minimize the number of
priority one items.

DRAFT GUIDELINE FOR COMMENT:

These relate to the current section 4.4 of UA Guidelines Working Draft,
describing "Alternative Representations of Audio, Video, Movies, and
Animations."  An example of a current item in the list, which is generic
for any type of multi-media, is to "allow user to turn on/off audio
descriptions of videos, movies, and animations." The recommendations below
are specific to SMIL; additional multi-media related recommendations are
still needed for the UA Guidelines.

DEFINITION OF TERM:
"dynamic"= while presentation is playing

GUIDELINES/RATIONALE/TECHNIQUES:
1. User should be able to identify and switch text captions of audio
objects on & off. [Priority one]
- Rationale: Some users require captions whenever they are available; they
should be able to set a preference in the user preferences to view
captions. Other users require captions only in certain circumstances; they
need a mechanism to identify when captions are available, and to turn them
on while a presentation is playing. The mechanism for identifying captions
should also function non-visually, since some users can neither hear audio
files nor see captions but can still access captions through
screen-reading software and refreshable Braille displays.
- Technique: Provide a user interface to switch display of media objects
with the "system-captions" test attribute on and off. This must be
possible both in the static user preferences, and dynamically while the
presentation is playing.
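For reference, a minimal sketch of the authoring side this technique
depends on (file names here are illustrative, not from the guidelines): in
SMIL 1.0 a caption stream carries the "system-captions" test attribute,
and the player evaluates it against the user's captions setting:

```xml
<smil>
  <body>
    <par>
      <!-- main video object (hypothetical file name) -->
      <video src="movie.rm"/>
      <!-- caption text stream, rendered only when the user's
           captions setting evaluates to "on" -->
      <textstream src="captions.rt" system-captions="on"/>
    </par>
  </body>
</smil>
```

A dynamic on/off control in the player then amounts to re-evaluating this
test attribute while the presentation is playing, rather than only at
load time.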

2. User should be able to control size, color, and background color of
captions. [Priority one]
- Rationale: Some users require specific font size, color, and contrast
with caption background to be able to view captions.
- Technique: Provide a user interface to change the size, color, and
background color of media objects with the "system-captions" test
attribute. This must be possible both in the static user preferences, and
dynamically while the presentation is playing.

3. User should be able to identify and turn on and off audio descriptions
of video objects. [Priority one]
- Rationale: Users who cannot see a video media object need a non-visual
way to identify that an audio description is available.
- Technique: Provide a standard mechanism for notifying third party
assistive technologies (e.g. screen-reading software) of the existence of
an audio description for a video object. Provide a mechanism (which can
function non-visually) for turning on or off the audio description.
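As a hedged sketch only: SMIL 1.0 does not define a test attribute
specific to audio descriptions, so one possible authoring convention
(file names and title text below are illustrative, not part of any
specification) is to carry the description as a parallel audio object
whose title the player can expose to assistive technologies:

```xml
<par>
  <!-- main video object (hypothetical file name) -->
  <video src="movie.rm" title="Main video"/>
  <!-- parallel description track; the player's on/off control
       would enable or disable this object, and its title would be
       reported to assistive technology -->
  <audio src="audiodesc.rm" title="Audio description of main video"/>
</par>
```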

4. User should be able to identify and access title, alt, and longdesc.
[Priority one]
- Rationale: Users who cannot see this information need a non-visual way to
identify and activate these elements.
- Technique: see following sections of UA Guidelines:
http://www.w3.org/WAI/UA/WD-WAI-USERAGENT.html#Alt-representations and
http://www.w3.org/WAI/UA/WD-WAI-USERAGENT.html#Alt-images [NOTE: Very
likely this item shouldn't be listed as a separate guideline in the final
guideline, but there should be some cross-linking to this item. One way to
do this would be to group scattered but related items in an appendix.]

5. Text media objects should be identifiable to third party assistive
technologies. [Priority one]
- Rationale: Users who cannot see need a non-visual way to identify that a
text media object is available to their third party assistive technology,
particularly screen readers. 
- Technique: Text media objects should use a standard mechanism for
notifying third party assistive technologies (e.g. screen-reading software)
of their availability.

6. Accessibility-related controls should be available regardless of whether
a player is stand-alone or plug-in. [Priority one]
- Rationale: Users need controls regardless of whether a player is
stand-alone or plug-in. 
- Technique: Include accessibility controls among user interface features
that are passed through the plug-in to the browser.

7. Accessibility-related information from the OS user profile should be
available to the SMIL player. [Priority one]
- Rationale: Options such as screen magnification or show-sounds are
equally necessary in a multimedia application to ensure consistency of
settings related to accessibility.
- Technique: SMIL players should inherit accessibility-related information
from the operating system user profile. 

8. User should be able to reposition captions. [Priority two]
- Rationale: Some multi-media presentations will include positioning
conflicts in which captions obscure key visual elements of video media
objects.
- Technique: Provide mechanisms to control caption display location
dynamically and through user preferences.
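One way a player could support this, sketched against SMIL 1.0 layout
(region names and coordinates below are illustrative): captions are
rendered into a named region, and the player lets the user override that
region's position from preferences or a dynamic control:

```xml
<smil>
  <head>
    <layout>
      <region id="videoarea" left="0" top="0" width="320" height="180"/>
      <!-- default caption region along the bottom; a player honoring
           this guideline would let the user override these coordinates -->
      <region id="captionarea" left="0" top="180" width="320" height="60"/>
    </layout>
  </head>
  <body>
    <par>
      <video src="movie.rm" region="videoarea"/>
      <textstream src="captions.rt" region="captionarea"
                  system-captions="on"/>
    </par>
  </body>
</smil>
```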

9. User should be able to dynamically control rate for audio media objects.
[Priority two]
- Rationale: Users with aural-processing learning disabilities may require
a slower pace of audio; experienced users of synthesized speech may
tolerate a far faster pace.
- Technique: Provide a mechanism for dynamic control of presentation pace.


----------
Judy Brewer    jbrewer@w3.org    +1.617.258.9741    http://www.w3.org/WAI
Director, Web Accessibility Initiative (WAI) International Program Office
World Wide Web Consortium (W3C)
MIT/LCS Room NE3-355, 545 Technology Square, Cambridge, MA, 02139, USA

Received on Wednesday, 16 September 1998 11:32:05 UTC