- From: Charles McCathieNevile <charles@w3.org>
- Date: Tue, 20 Apr 1999 12:59:09 -0400 (EDT)
- To: Eric Hansen <eghansen@yahoo.com>
- cc: w3c-wai-gl@w3.org
I still feel that the way in which the auditory description is produced
belongs in techniques. At the moment, the effective method is to use a
data file (typically in a dedicated audio format) and an audio player on
the client side. In a few years the most effective method will be to use
a data file in a different format (namely text, which happens to compress
well and to be useful for a number of other tasks) with an audio player
(a text-to-speech synthesiser, to be precise).
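For concreteness, a minimal SMIL 1.0 sketch of the current technique
(the file names here are hypothetical): the prerecorded description is
simply scheduled in parallel with the movie's own tracks.

  <smil>
   <body>
    <par>
     <!-- the movie's own visual and audio tracks -->
     <video src="movie.mpg"/>
     <audio src="soundtrack.wav"/>
     <!-- prerecorded auditory description, timed to fit pauses
          in the main soundtrack -->
     <audio src="description.wav"/>
    </par>
   </body>
  </smil>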
My personal feeling is that the additions are unnecessary complication.
But I don't think it's a show-stopper.
Charles McCN
On Mon, 19 Apr 1999, Eric Hansen wrote:
There are several bugs in checkpoint 1.3 regarding auditory
description.
{EH: Revision 3 - Suggested Revision. I have changed my view since
posting an earlier version of this document on 4/18/99.}
{EH: Important Bugs:
(1) Please note that the 4/16/99 version of the checkpoint incorrectly
implies that players will generate "prerecorded" audio; according to
the glossary definition, "auditory description" pertains to prerecorded
audio.
(2) Changed to include animations. I am not sure there is any good
reason why animations should not be included along with movies; note
that checkpoint 1.4 would seem to allow for the possibility of auditory
equivalents for animations.
(3) This checkpoint was changed to Priority 2; it can't be a Priority 1
since a text equivalent has already been provided per checkpoint 1.1.
(4) Instead of referring to capabilities of "video players" in general,
this checkpoint should refer to the user agents used by individuals who
are blind; the checkpoint is crucial only for individuals who are blind
and therefore needs to refer only to them. Fortunately, Web users who
are blind often have highly capable user agents that are well-suited to
taking text equivalents and rendering them as speech.}
1.3 Until most user agents used by individuals who are blind can use a
text equivalent of the time-based visual track of an audio-visual
presentation to generate a synchronized synthesized-speech equivalent,
provide an auditory description (i.e., a prerecorded auditory
equivalent of the visual {EH: Did I coin a term here? Any better
suggestions?} track that is synchronized with the audio track).
[Priority 2].
Note that checkpoints 1.1, 1.3, and 1.4 all relate to time-based
audio-visual presentations such as movies and animations with
accompanying audio. Checkpoint 1.1 requires a text equivalent of the
visual track (video or animation), and checkpoint 1.4 requires that it
be synchronized with the presentation. As noted, checkpoint 1.3's
requirement for an auditory description (which is a specialized,
synchronized auditory equivalent of the visual track) is only in effect
temporarily.
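In the long term, the synchronized text equivalent could itself be
scheduled as a parallel track. A sketch in SMIL 1.0 (again with
hypothetical file names), where a user agent equipped with a speech
synthesiser could render the text track as audio rather than display
it:

  <smil>
   <body>
    <par>
     <video src="movie.mpg"/>
     <audio src="soundtrack.wav"/>
     <!-- time-stamped text equivalent of the visual track; a
          speech-capable user agent could speak this in place of a
          prerecorded description -->
     <textstream src="description.rt"/>
    </par>
   </body>
  </smil>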
Refer also to checkpoint 1.1 and checkpoint 1.4.
Techniques for checkpoint 1.3
==
Old 4/16/99 Version of Checkpoint 1.3
1.3 Until most video player technologies can generate an auditory
description of a video track from a text equivalent, provide an
auditory description of the video track (synchronized with the audio
track). [Priority 1]
Refer also to checkpoint 1.1 and checkpoint 1.4.
Techniques for checkpoint 1.3
==
Eric G. Hansen
Development Scientist, Educational Testing Service
ETS 12-R
Rosedale Road
Princeton, NJ 08541
(W) 609-734-5615, (Fax) 609-734-1090
Internet: ehansen@ets.org
--Charles McCathieNevile mailto:charles@w3.org
phone: +1 617 258 0992 http://www.w3.org/People/Charles
W3C Web Accessibility Initiative http://www.w3.org/WAI
MIT/LCS - 545 Technology sq., Cambridge MA, 02139, USA