Multimedia comments issue #138

Hi,
Here are my comments on issue #138:
http://cmos-eng.rehab.uiuc.edu/ua-issues/issues-table.html#138

I do not have any strong opinions on the use of the terms 
"synchronized alternative equivalents," "synchronized 
equivalents," "synchronized alternatives," and "continuous 
equivalents."
Some alternatives are synchronized and some are not, but if we 
make clear which are which, perhaps we can use the same terms for 
both. I don't see the difference between "alternative" and 
"equivalent," so I am happy to let the editorial types make the 
decision on this part of the issue.

I do have comments on other parts of Eric's proposal. I agree with 
Wendy's comments to the GL list, archived at:
http://lists.w3.org/Archives/Public/w3c-wai-gl/1999OctDec/0218.html
Many of Eric's proposals for the WCAG involved splitting a single 
checkpoint into several checkpoints. Wendy commented that she felt 
the material could be incorporated into techniques instead. I 
think we can take a similar approach for the UAAG, and that much 
of Eric's analysis would make excellent techniques material. 
Specifically:

EH::
I think that there are a huge number of ways in which text, video, 
audio and their equivalents _could be combined_ to make multimedia 
presentations and audio clips accessible to people with 
disabilities, but only a much smaller number of ways are really 
essential or really valuable and it is up to WAI to more 
specifically identify and describe that smaller number of 
combinations.

MR::
I agree that certain combinations of multimedia tracks are more 
likely than others to be useful, but the existing UA checkpoints 
already say that all tracks must be able to be turned on and off. 
This gives the user complete control over which tracks are 
rendered, making it unnecessary for the UA to understand the 
combinations. This would include 2.1, "Ensure that the user has 
access to all content, including alternative equivalents for 
content. [Priority 1]" and also 2.5, "If more than one alternative 
equivalent is available for content, allow the user to choose from 
among the alternatives. This includes the choice of viewing no 
alternatives. [Priority 1]" as well as the checkpoints in Guideline 
3 that specify that users be able to turn any track on and off, 
since a track might cause distraction or otherwise interfere with 
use. I think Eric's excellent description of the uses of different 
combinations of tracks would make helpful techniques material, so 
that UA developers can see the reasons to implement the checkpoints 
listed here.
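
As a rough illustration only (this markup is my own sketch, with 
invented file names, not something taken from any spec or 
proposal), a SMIL presentation might carry several parallel 
tracks, and a UA satisfying 2.1 and 2.5 would simply let the user 
turn each of them on or off, in whatever combination suits that 
user:

   <smil>
     <body>
       <par>
         <video src="lecture.mpg"/>          <!-- primary video -->
         <audio src="soundtrack.wav"/>       <!-- primary audio -->
         <textstream src="captions.rt"/>     <!-- captions -->
         <audio src="descriptions.wav"/>     <!-- audio description -->
         <video src="signing.mpg"/>          <!-- sign language -->
       </par>
     </body>
   </smil>

The UA does not have to know which combinations of these tracks 
are sensible; it only has to expose each track to the user.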

Eric's analysis includes distinguishing between text transcripts, 
the text of audio descriptions, and collated text transcripts 
(which are a combination of the other two). The collated text 
transcript is a neat idea, but it is not yet part of any 
specification, so I don't think we can shape our guidelines around 
it. Similarly, both the WCAG and the UAAG would like to support 
synthesized speech rendering of audio descriptions from a text 
file, but we do not have a technology that can do that. Another 
possible synchronized equivalent that does not yet have an 
implementation is a sign language track. Though I've argued in the 
past that sign language is an important equivalent (and I still 
feel that it is), I acknowledge that unless SMIL or some other 
specification gives authors a way to indicate that a given track 
is intended as an equivalent track, we can't require UAs to allow 
that track to be turned on and off in the way that we can require 
for captions and audio descriptions (both defined as of the latest 
public draft of SMIL-Boston).
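
For captions, at least, there is already a hook of this kind. 
Again only as a sketch (the attribute name below is SMIL 1.0's 
system-captions test attribute; the SMIL-Boston draft spells its 
test attributes differently, and the file names are invented), an 
author can mark a text stream as a caption track, and the UA can 
then tie it to the user's captioning preference or offer an 
explicit toggle:

   <par>
     <video src="lecture.mpg"/>
     <audio src="soundtrack.wav"/>
     <textstream src="captions.rt" system-captions="on"/>
   </par>

There is no comparable markup that says "this video track is a 
sign language equivalent of the audio," which is why we cannot yet 
require UAs to treat such a track the same way.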

Overall, what I'm trying to say is that we need to craft some 
forward-looking language, probably in the techniques, to promote 
new ideas. This would include synthesized audio descriptions, 
combining captions and audio descriptions into a collated text 
transcript (which can then replace both tracks), and a way to 
indicate that a video track is intended as an alternative 
equivalent, e.g. for sign language. But until then, I think we are 
best off with the current umbrella checkpoints referring to 
various kinds of media, with techniques showing the currently 
recognized ways of implementing them as well as future ideas for 
improved features.

I think this approach matches the spirit of our changes at the 
December 7 telecon, where we resolved to merge the checkpoints in 
GL 4 for audio, video, and animation into a single set of 
checkpoints. Whenever possible, I think we are better off with 
fewer checkpoints, as long as they are clear; the use of examples 
and Notes helps with that clarity. I don't think we need a 
separate series of checkpoints for each different aspect of 
alternative tracks.

Looking forward to discussing this with the working group on Wednesday.

-Madeleine

Received on Tuesday, 21 December 1999 15:17:42 UTC