- From: David Poehlman <poehlman@clark.net>
- Date: Tue, 06 Jun 2000 22:48:07 -0400
- CC: Jon Gunderson <jongund@uiuc.edu>, w3c-wai-ua@w3.org
Leave audio at P2, because it does not render alternative content in the same way that synthesized speech does. We need to discuss the two together, because that is why one is P1 and the other P2. The reason we need P1 for speech synthesis is that alternative information will be provided through this rendering medium; the history here is that WCAG states that audio description is at some point to be delivered as synthesized speech. Audio is P2 because other means of controlling it are available and the user agent might not have access to the hardware. If a user agent supports the P1 synthesized speech, then it needs to allow for control, which most do not today. That is the other component. They are tied together in this way for purposes of rationale.

--
Hands-On Technolog(eye)s
ftp://poehlman.clark.net
http://poehlman.clark.net
mailto:poehlman@clark.net
voice 301-949-7599
end sig.
Received on Tuesday, 6 June 2000 22:47:39 UTC