Re: Revised Checkpoints: WCAG(1.4/1.3) and UAAG(2.5)

Hello,

I've finally had some time to go through this in detail.  This was quite a 
thorough analysis, Eric!

My thinking on these issues has changed a bit after all of these 
threads.  My responses are included throughout the e-mail, including a few 
counter-proposals.  EH:: prefixes Eric's text; WC:: prefixes mine.

EH::
>7. WAI should develop one or more specification documents (W3C Notes or 
>Recommendations) for:
>
>a. auditory descriptions, including (1) synthesized-speech auditory 
>descriptions and (2) prerecorded auditory descriptions (including 
>"prerecorded auditory description tracks" and "prerecorded auditory 
>description supplement tracks", the latter being explained later in this 
>document)
>b. captions
>c. synchronization of collated text transcripts
>d. synchronization of audio clips with their text transcripts
>
>I see document "c" as possibly encompassing "a" and "b". Even better 
>perhaps, all four items could be addressed together. (I am not 
>sure whether all these are within the charter of SMIL.)
>
>I would not expect the task of retrofitting existing "prerecorded auditory 
>descriptions tracks" to the specifications to be difficult. Content and 
>data in existing captions could, I expect, be almost entirely reused in 
>new captions conforming to the captions specification.
WC::
This sounds like something that would be in the Techniques document, 
particularly documented in a SMIL-specific section/chapter.
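
For instance, a rough SMIL 1.0 sketch (element and attribute names are from 
the SMIL 1.0 Recommendation; the file names are hypothetical) of how an 
existing prerecorded auditory description track and existing caption data 
might be reused alongside the visual and auditory tracks of a movie:
<example>
<smil>
  <body>
    <par>
      <!-- visual and auditory tracks of the movie -->
      <video src="movie-video.rm"/>
      <audio src="movie-audio.rm"/>
      <!-- existing prerecorded auditory description track -->
      <audio src="movie-description.rm"/>
      <!-- existing caption data, shown when the user requests captions -->
      <textstream src="movie-captions.rt" system-captions="on"/>
    </par>
  </body>
</smil>
</example>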

EH::
>====
>2. Avoid the use of "synchronized alternative equivalents" in WCAG.
>
>The term seems redundant.
WC:: ok

EH::
>3. Avoid the use of "synchronized equivalents" in both WCAG and UAAG.
>
>This is important because often the components that are presented 
>together are not equivalent to each other. The term seems misleading.
>====
>
>4. Use the term "synchronized alternatives".
>
>Implies the idea that it is alternative content, which is essentially 
>true. This is my preferred term, I think.
>====
WC::
In some cases it is an alternative, but in others it is an equivalent (for 
example, captions of speech or alt-text for a bitmap image).  Also, I thought 
that we had decided to replace "alternative" with "equivalent" if the 
"alternative" was providing the _functional_ equivalent.  I suggest we 
stick with the term "synchronized equivalent."

EH::
>5. Use "visual track" and "auditory track"
>
>Use "visual track" and "auditory track" rather than video track and audio 
>track when referring to multimedia presentations.

WC:: ok

EH::
>====
>
>6. Avoid the term "continuous alternatives".
>
>Not sure that this is a great term. It is probably best just to name the 
>specific things.

WC:: I did not see this in WCAG (neither the Guidelines nor the Techniques 
document); this must be a UAGL issue?

EH::
>====
>7. Add synchronization to the glossary.
>
>"Synchronization, Synchronize, Synchronization Data, Synchronized 
>Alternatives"
>
>"Synchronization refers to sensible time-coordination of two or more 
>presentation components, particularly where at least one of the components 
>is a multimedia presentation (e.g., a movie or animation) or an _audio 
>clip_, or a portion of such a presentation or audio clip."
>
>"For Web content developers, the requirement to synchronize means to 
>provide the data that will permit sensible time-coordinated presentation 
>by a user agent. For example, a Web content developer can ensure that the 
>segments of caption text are neither too long nor too short and that they 
>are mapped to segments of the visual track that are appropriate in length."

WC:: I can see adding this part to the WCAG glossary.
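
To make the "provide the data" idea concrete, here is a rough SMIL 1.0 
sketch (element and attribute names are from the SMIL 1.0 Recommendation; 
the file names and durations are made up) in which each caption segment is 
given explicit timing so a user agent can coordinate it with the 
corresponding segment of the visual track:
<example>
<par>
  <!-- visual track -->
  <video src="movie-video.rm"/>
  <!-- caption segments presented one after another, each for a set time -->
  <seq>
    <text src="caption-segment1.html" dur="4s"/>
    <text src="caption-segment2.html" dur="6s"/>
    <text src="caption-segment3.html" dur="5s"/>
  </seq>
</par>
</example>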

EH::

>"The idea of "sensible time-coordination" of components centers of the 
>idea of simultaneity of presentation, but also encompasses strategies for 
>handling deviations from simultaneity resulting from a variety of causes.
>
>Consider how certain deviations in simultaneity might be handled in 
>auditory descriptions. Auditory descriptions are considered synchronized, 
>since each segment of description audio is presented at the same time as a 
>segment of the auditory track, e.g., a natural pause in the spoken 
>dialogue. Yet a deviation can arise when a segment of the auditory 
>description is lengthy enough that it cannot be entirely spoken within the 
>natural pause. In this case there must be a strategy for dealing with the 
>mismatch between the description and the pause in the auditory track. The 
>two major types of auditory descriptions lend themselves to different 
>strategies. Prerecorded auditory descriptions usually deal with such 
>mismatches by spreading the lengthy auditory description over more than 
>one natural pause. When expertly done, this strategy does not ordinarily 
>weaken the effectiveness of the overall presentation. On the other hand, a 
>synthesized-speech auditory description lends itself to other strategies. 
>Since synthesize
>
>Let us briefly consider how deviations might be handled for captions.
>
>Captions consist of a text equivalent of the auditory track that is 
>synchronized with the visual track. Captions are essential for individuals 
>who require an alternative way of accessing the meaning of audio, such as 
>individuals who are deaf. Typically, a segment of the caption text appears 
>visually near the video for several seconds while the person reads the 
>text. As the visual track continues, a new segment of the caption text is 
>presented.
>
>One problem arises if the caption text is longer than can fit in the 
>display space. This can be particularly difficult if, due to a visual 
>disability, the font size has been enlarged, thus reducing the amount of 
>caption text that can be presented. The user agent must respond sensibly 
>to such problems, such as by ensuring that the user has the opportunity to 
>navigate (e.g., scroll down or page down) through the caption segment 
>before proceeding with the visual presentation and presenting the next 
>segment. Some means must be provided to allow the user to signal that the 
>presentation may resume.
>
>=====
WC::
Some of this seems appropriate for the Techniques document; other pieces 
are obviously intended for the User Agent Guidelines glossary. They could 
be reworked for discussion in WCAG Techniques, or could be linked to from 
WCAG Techniques.

EH::
>PART 3 -- CHANGES TO WCAG DOCUMENT
>
>1. Add checkpoint 1.3 into checkpoint 1.4 and then break 1.4 into several 
>checkpoints.

WC::
I am deleting much of your text and commenting on certain pieces of it.  In 
general, I feel that much of what is being incorporated into checkpoint 
text is more appropriate in the Techniques document.

I propose one new checkpoint, and reworking 1.3 and 1.4 to cover the six 
checkpoints that Eric proposed.  1.3 is discussed here; 1.4 and 1.x are 
discussed later.

<checkpoint-proposal>
1.3 Provide a synchronized auditory description for each multimedia 
presentation (e.g., movie or animation).  [Priority 1 for important 
information, Priority 2 otherwise.]
</checkpoint-proposal>

The techniques for satisfying this checkpoint will be discussed in the 
Techniques document:
1. synchronizing a prerecorded human auditory track (a rough SMIL sketch is 
included below).
2. synchronizing a recorded, speech-synthesized auditory track.
3. synchronizing a text file on the fly.

I believe your proposed checkpoints 1.4.A and 1.4.B are techniques for 
checkpoint 1.3.
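
For illustration, a rough SMIL 1.0 sketch of technique 1 (a prerecorded 
human auditory description); the file names and time offsets are 
hypothetical, and the "begin" values are chosen so that each short 
description clip plays during a natural pause in the dialogue:
<example>
<par>
  <video src="movie.rm"/>
  <!-- description clips timed to natural pauses in the dialogue -->
  <audio src="description-pause1.rm" begin="12s"/>
  <audio src="description-pause2.rm" begin="47s"/>
</par>
</example>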

EH::
>====
>New WCAG checkpoint 1.4.C (4 December 1999):
>"1.4.C For each multimedia presentation (e.g., movie or animation), 
>provide captions and a collated text transcript. [Priority 1]"
>
>Rationale: These two pieces are essential (captions for individuals who 
>are deaf; collated text transcript for individuals who are deaf-blind). We 
>know that captions are needed and we have technologies that can handle it. 
>A collated text transcript is relatively straightforward to supply.

WC::
This is a rewording of 1.4.  To make it jibe with my proposed rewording of 
1.3, I propose:
<checkpoint-proposal>
1.4 Provide captions and a collated text transcript for each multimedia 
presentation (e.g., movie or animation).  [Priority 1]
</checkpoint-proposal>

EH::

>====
>New WCAG checkpoint 1.4.D (4 December 1999) (id: WC-ACLIP-TT):
>"1.4.D  For each audio clip, provide a text transcript. [Priority 1]"
>
>Rationale: A text transcript is _essential_ for disability access to audio 
>clips, whereas a text transcript is not essential for access to auditory 
>tracks of multimedia presentations (for example, the collated text 
>transcript and caption text include the information found in the text 
>transcript of the auditory track).
>====

WC::
This is covered by the current checkpoint 1.1:
<current-checkpoint>
1.1 Provide a text equivalent for every non-text element (e.g., via "alt", 
"longdesc", or in element content). This includes: images, graphical 
representations of text (including symbols), image map regions, animations 
(e.g., animated GIFs), applets and programmatic objects, ascii art, frames, 
scripts, images used as list bullets, spacers, graphical buttons, sounds 
(played with or without user interaction), stand-alone audio files, audio 
tracks of video, and video. [Priority 1]
</current-checkpoint>
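
For a stand-alone audio file, a minimal (hypothetical) HTML fragment 
satisfying checkpoint 1.1 might be a descriptive link plus a link to a full 
text transcript; the file names are made up:
<example>
<p>
  <a href="address.wav">Audio recording of the opening address</a>
  (<a href="address-transcript.html">text transcript</a>)
</p>
</example>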

EH::

>New WCAG checkpoint 1.4.E (4 December 1999) (id: WC-ACLIP-SYNC-TT):
>"1.4.E  Synchronize each audio clip with its text transcript. [Priority 
>1]" {I prefer the brevity of this version.}
>{or}
>"1.4.E  For each audio clip, provide data that will allow user agents to 
>synchronize the audio clip with the text transcript. [Priority 1]"
>"Note: This checkpoint becomes effective one year after the release of a 
>W3C recommendation addressing the synchronization of audio clips with 
>their text transcripts."
WC::
I agree with the discussion on the list that "audio" should be included in 
"multimedia."  However, there was consensus that this ought to be a 
Priority 2.  Therefore, I propose:
<checkpoint-proposal>
1.x Provide captions for each stand-alone audio clip or stream, as 
appropriate. [Priority 2]
Note. For short audio clips, providing a text equivalent as discussed in 
checkpoint 1.1 is all that is needed.  This checkpoint is intended to cover 
audio clips of speech such as news broadcasts or a lyrical performance.
</checkpoint-proposal>
the "as appropriate" is supposed to signify that it is not necessary to 
caption all audio clips.  for example, we discussed back in May that we do 
not need to caption an instrumental performance, however it is appropriate 
to caption a musical performance with singing.
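
As an illustration of captioning a stand-alone audio clip, a rough SMIL 1.0 
sketch (hypothetical file names) pairing the audio stream with a caption 
text stream that user agents can display on request:
<example>
<par>
  <audio src="broadcast.rm"/>
  <textstream src="broadcast-captions.rt" system-captions="on"/>
</par>
</example>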

EH::

>New WCAG checkpoint 1.4.F
>"For each multimedia presentation for which a synthesized-speech auditory 
>description of _important_ information is likely to be inaccessible, 
>provide a prerecorded auditory description of _important_ information."
>"[Priority 3]"
>{or}
>"For each multimedia presentation, provide a prerecorded auditory 
>description."
>"[Priority 3]"
>{or}
>"For each multimedia presentation, provide a prerecorded auditory 
>description for _important_ information."
>"[Priority 3]"
WC::  If synthesizing auditory descriptions is a technique for 1.3, then 
this proposed checkpoint is not needed.

thoughts?
--wendy
--
wendy a chisholm
world wide web consortium
web accessibility initiative
madison, wi usa
tel: +1 608 663 6346
/--

Received on Thursday, 16 December 1999 11:32:39 UTC