- From: David Pawson <DPawson@rnib.org.uk>
- Date: Wed, 27 Aug 1997 08:54:07 +0100
- To: w3c-wai-wg@w3.org
> -----Original Message-----
> From: Al Gilman [SMTP:asgilman@access.digex.net]
> Sent: Tuesday, August 26, 1997 2:34 PM
> To: w3c-wai-wg@w3.org
> Subject: Re: Audio Access
>
> to follow up on what Geoff Freed said:

[David Pawson] ..snip

> > [referring to...]
> > > For those using synthetic speech to access text, there are
> > > potential problems when the sound effect, and/or the spoken text
> > > of a description of the sound effect, collides (in the audio
> > > delivered to the user) with the presentation of spoken text
> > > extracted from the page.

[David Pawson] Are we approaching a 'channelling' effect?

The impact of personal choice would leave a user instructing the browser to selectively action visual and auditory output, leaving [for example] presented material to be channelled primarily to an audio device (for the visually impaired reader), which would eliminate any audio [switched off] from the page. Similarly, auto-generated sounds from the web page might be switched off and replaced with visual [alt] output for the user who has no use for auditory output.

Channels would need to be defined for:

  Primary output, visual
  Primary output, audio
  [One of these may be defined as my preferred prime channel]
  Secondary output, visual
  Secondary output, audio

If we wanted to get exotic, the presence of output on a secondary channel could raise an event to which I may wish to respond by halting the main channel output to listen to, or look at, the secondary channel.

Just a thought.

DaveP
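[A reader today might sketch the channelling idea above as a small routing policy. This is purely illustrative; the type names, the `route` function, and its decision logic are hypothetical and do not describe any browser's actual behaviour.]

```python
from dataclasses import dataclass
from enum import Enum

class Modality(Enum):
    VISUAL = "visual"
    AUDIO = "audio"

@dataclass
class ChannelPolicy:
    # The user's preferred prime modality, e.g. AUDIO for a visually
    # impaired reader using synthetic speech. Hypothetical structure.
    prime: Modality
    # Whether activity on a secondary channel raises an event that
    # may pause the primary channel (the "exotic" case in the mail).
    interrupt_on_secondary: bool = False

def route(policy: ChannelPolicy, content_modality: Modality, has_alt: bool) -> str:
    """Decide how one piece of page output is delivered to the user.

    Content matching the prime modality goes to the primary channel;
    other content is either substituted with its alternative form
    (alt text rendered as speech, sounds rendered as visual alts)
    or suppressed, avoiding the audio collision described above.
    """
    if content_modality == policy.prime:
        return f"deliver on primary {policy.prime.value} channel"
    if has_alt:
        return f"substitute {policy.prime.value} alternative"
    return "suppress (switched off by user preference)"

# An audio-prime user: page sound effects would collide with the
# synthetic speech reading the page, so non-speech content is
# converted to the prime modality or dropped.
policy = ChannelPolicy(prime=Modality.AUDIO, interrupt_on_secondary=True)
print(route(policy, Modality.VISUAL, has_alt=True))
print(route(policy, Modality.AUDIO, has_alt=False))
```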
Received on Wednesday, 27 August 1997 03:52:18 UTC