- From: Wendy A Chisholm <wendy@w3.org>
- Date: Mon, 29 Nov 1999 13:05:14 -0500
- To: Marja-Riitta Koivunen <marja@w3.org>, w3c-wai-gl@w3.org
- Cc: w3c-wai-ua@w3.org
> >OK. So does the text equivalent in WCAG 1.3 refer to the collated text?

no, 1.3 refers to the auditory recording of the video description.

>Reading that aloud makes sense because then you don't need to worry about
>timing. The user can just listen to it separately from the video. And I
>guess this is something that the screen readers can already do
>automatically. But I guess we wanted every browser to do that in 1.3 UNTIL
>UA clause?

no, we want the user agent to synchronize the synthesis of the speech with
the rest of the auditory track. the reason is that not everyone has the
tools or the time to record additional auditory tracks. we figured if the
user agent could synthesize and synchronize, it would save the author some
time.

>It did not occur to me earlier from WCAG that the author needs to always
>provide the collated text. Should that be said more explicitly if that is
>demanded?

the collated text transcript is not ALWAYS required. It depends on the type
of clip and the information being provided in the clip. more information is
available at:
http://www.w3.org/TR/WAI-WEBCONTENT-TECHS/#video-information

>Captions do have timecodes. Video (or audio) description does too. But
>having a textual transcript of the audio description with proper timecodes
>means the audio description already exists. So I don't know why to generate
>it from text, to save space?

as i said earlier, to relieve the author of some work.

>OK, so collated transcripts are needed for video presentations as P1? In
>this case there is no need for synchronization with video and audio (unless
>you want to synchronize the info with a group of people where some can
>see or hear).

There are 2 priority 1 checkpoints. here is text from the issues list that
was generated during Proposed Recommendation:

<BLOCKQUOTE>
Final resolution
1.3 Until most user agents can automatically read aloud the text equivalent
of the visual track, provide an auditory description of the important
information in the visual track of multimedia presentations. [Priority 1]
(Note: Per 1.4 this must be synchronized with the audio track)
(See 1.1 regarding text equivalents for visual information)
**[Note: If there is no important visual information (e.g. a talking head)
then a visual description is not required.] --- goes into techniques doc

Rationale: animation is covered if part of a presentation. This will be made
clear in the techniques doc. Other animations that are part of programmatic
interactions are covered elsewhere.

Minority position: Eric, Phill: Full compiled text would provide basic level
access to the information. (p1) Providing an auditory description would
provide better access, or p2 level access.
</BLOCKQUOTE>

this is available from:
http://www.w3.org/WAI/GL/wai-pr-issues#animation-movies (issue 20)

issue 21 is also related:
http://www.w3.org/WAI/GL/wai-pr-issues#synch-a-v

specific discussions that occurred during Proposed Rec:
http://lists.w3.org/Archives/Public/w3c-wai-gl/1999AprJun/0043.html - Jason
summarizes Gregg's, Eric's, and his own position.
http://lists.w3.org/Archives/Public/w3c-wai-gl/1999AprJun/0050.html -
Charles clarifies that we need a general checkpoint, but that who provides
the audio (a synthesizer or a recording of a human) is a technique, and
discusses some future scenarios.

--w

<>
wendy a chisholm (wac)
world wide web consortium (w3c)
web accessibility initiative (wai)
madison, wisconsin (madcity, wi)
united states of america (usa)
tel: +1 608 663 6346
</>
Received on Monday, 29 November 1999 12:58:42 UTC