- From: Wendy A Chisholm <wendy@w3.org>
- Date: Wed, 23 Feb 2000 14:37:47 -0500
- To: w3c-wai-er-ig@w3.org
I took an action item on the 14 February call to find out how to determine
tracks in SMIL. It's very straightforward. Consider the following example
from the SMIL access note [1]:
<par>
  <audio alt="Interview with Harvey, English audio"
         src="audio.rm"/>
  <video alt="Interview with Harvey" src="video.rm"/>
  <textstream alt="English captions for interview with Harvey"
              src="closed-caps.rt"/>
</par>
This will play an audio track, a video track, and a text track (captions).
Therefore, it is fairly simple to determine whether a text track is
associated with the audio/video tracks. However, this text track *could*
be language subtitles rather than captions.
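For an evaluation tool, the check described above can be sketched in a few
lines of Python; this is my own illustration (the variable names and the
inlined markup are assumptions), not code from the access note:

```python
import xml.etree.ElementTree as ET

# The first example from the SMIL access note, inlined for illustration.
SMIL = """<par>
  <audio alt="Interview with Harvey, English audio" src="audio.rm"/>
  <video alt="Interview with Harvey" src="video.rm"/>
  <textstream alt="English captions for interview with Harvey"
              src="closed-caps.rt"/>
</par>"""

par = ET.fromstring(SMIL)

# Is there an audio or video track in this <par>?
has_av = any(par.find(tag) is not None for tag in ("audio", "video"))

# Which text tracks accompany it?
text_tracks = par.findall("textstream")

if has_av and text_tracks:
    print("text track(s) found:", [t.get("alt") for t in text_tracks])
```

This only establishes that *some* text track is present; as noted above, it
cannot tell captions from subtitles on its own.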
Consider the following example:
<par>
  <audio alt="My Favorite Movie, English audio" src="audio.rm"/>
  <video alt="My Favorite Movie" src="video.rm"/>
  <textstream alt="Stock ticker" src="stockticker.rt"/>
  <textstream alt="English captions for My Favorite Movie"
              system-captions="on"
              src="closed-caps.rt"/>
</par>
It uses the "system-captions" attribute to tell a SMIL player that, if the
user wants captions, this is the track to play. I don't know if we want to
get into repairing SMIL, but if we find a SMIL presentation without the
"system-captions" flag, we could raise a warning.
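That warning could be sketched like this (again my own illustration; the
function name and message wording are assumptions):

```python
import xml.etree.ElementTree as ET

def caption_warning(par):
    """Return a warning string if a <par> has text tracks but none of
    them is flagged with system-captions="on", else None."""
    streams = par.findall("textstream")
    if streams and not any(s.get("system-captions") == "on" for s in streams):
        return 'warning: text track(s) present but none flagged "system-captions"'
    return None

# The second example from the note: the captions track IS flagged,
# so no warning is raised.
SMIL = """<par>
  <audio alt="My Favorite Movie, English audio" src="audio.rm"/>
  <video alt="My Favorite Movie" src="video.rm"/>
  <textstream alt="Stock ticker" src="stockticker.rt"/>
  <textstream alt="English captions for My Favorite Movie"
              system-captions="on" src="closed-caps.rt"/>
</par>"""

print(caption_warning(ET.fromstring(SMIL)))  # → None
```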
SMIL 1.0 does not have a similar flag for auditory descriptions, although
one is being discussed for the next release. Multiple audio tracks can be
included, but they could just as well be used for language overdubbing.
Therefore, checking for captions is currently more straightforward than
checking for an auditory description, but there are clues that you can use
to make a guess and ask the author for confirmation.
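One such clue is the one mentioned above: more than one audio track
alongside a video track. A hypothetical heuristic (my own sketch, not
anything defined by SMIL 1.0 or the access note) might collect those
tracks and hand them to the author for confirmation:

```python
import xml.etree.ElementTree as ET

def description_clues(par):
    """Heuristic only: several audio tracks next to a video track *might*
    mean an auditory description is present -- or just overdubbing --
    so the result is a prompt for the author, never a verdict."""
    audios = par.findall("audio")
    if par.find("video") is not None and len(audios) > 1:
        return [a.get("alt", a.get("src", "?")) for a in audios]
    return []

# Hypothetical presentation with two audio tracks.
SMIL = """<par>
  <video alt="My Favorite Movie" src="video.rm"/>
  <audio alt="English audio" src="audio.rm"/>
  <audio alt="English audio with descriptions?" src="audio-desc.rm"/>
</par>"""

for alt in description_clues(ET.fromstring(SMIL)):
    print("ask author about:", alt)
```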
--wendy
[1] http://www.w3.org/TR/SMIL-access/
--
wendy a chisholm
world wide web consortium
web accessibility initiative
madison, wi usa
tel: +1 608 663 6346
Received on Wednesday, 23 February 2000 14:33:12 UTC