- From: Ian Jacobs <ij@w3.org>
- Date: Fri, 24 Sep 1999 10:09:28 -0400
- To: Jon Gunderson <jongund@staff.uiuc.edu>
- CC: Madeleine Rothberg <Madeleine_Rothberg@wgbh.org>, w3c-wai-ua@w3.org
Jon Gunderson wrote:
>
> Response in JRG:
>
> At 05:34 PM 9/23/99 -0400, Ian Jacobs wrote:
> >Jon Gunderson wrote:
> >>
> >> 5) Issue #80 Make audio available as text.
> >>
> >> http://cmos-eng.rehab.uiuc.edu/ua-issues/issues-linear.html#80
> >>
> >> MR: In the rationale of Guideline 1, I suggest an additional example on
> >> output device independence. The example would address the needs of deaf
> >> users as well as output device independence. Take text from [3]:
> >>
> >> "And any output provided in audio should also be available in text since
> >> most alternative output mechanisms rely on the presence of system-drawn
> >> text on the screen."
> >
> >> AG: Also add a cross-reference to show sounds in the techniques document.
> >
> >> Resolved: ok to add text to the introduction
> >
> >Hi,
> >
> >I looked back at this text from [3] and I'm not sure I understand.
> >Why does it belong in the section on device independence? Is this
> >about user agents *generating* text from audio? Or about ensuring
> >that author-supplied text is available?
> >
> >Or does "audio" mean "speech"?
>
> JRG: I believe the primary concern is that audio (sampled sound) files
> have text descriptions available, whether the sound is speech or any
> other type of sound. The text should describe the sounds and include a
> transcript of the speech (if any), or the lyrics of a song in the wave
> file. I believe that in general we want text descriptions of any
> document format that encodes audio.

Then this is not a user agent requirement but an authoring requirement.

 - Ian

--
Ian Jacobs (jacobs@w3.org)   http://www.w3.org/People/Jacobs
Tel/Fax: +1 212 684-1814
Cell:    +1 917 450-8783
Received on Friday, 24 September 1999 10:11:36 UTC