
Re: Resolution of Issue PR#285 on audio volume control

From: David Poehlman <poehlman@clark.net>
Date: Tue, 06 Jun 2000 22:48:07 -0400
Message-ID: <393DB7E7.1DAA33AF@clark.net>
CC: Jon Gunderson <jongund@uiuc.edu>, w3c-wai-ua@w3.org
Leave it at P2, because audio does not render alternative content in the same way that synthesized speech does. We need to discuss the two together, because that is why one is P1 and the other P2. The reason we need P1 for speech synthesis is that alternative information will be provided through that rendering medium. The history of this is that WCAG states that audio description is at some point to be delivered in synthesized speech...

P2 for audio, because other means of controlling it are available and the user agent might not have access to the hardware. If a user agent supports the P1 synthesized speech, then it needs to allow for control, which most do not today. That is the other component. They are tied together for purposes of rationale in this way.
Hands-On Technolog(eye)s
voice 301-949-7599
end sig.
Received on Tuesday, 6 June 2000 22:47:39 UTC
