
[ACSS] Meta-Question: responsibility for firing aural cues (redux)

From: Gregory J. Rosmaita <oedipus@hicom.net>
Date: Tue, 11 Dec 2007 03:09:55 +0000
To: wai-xtech@w3.org, wai-liaison@w3.org
Message-Id: <20071211030911.M8330@hicom.net>

no individual and no working group has directly addressed, or been able 
to answer, the question:

"In the absence of a screen-reader or screen-magnification program or 
any other application that tracks a UA cursor and provides flow, 
whose responsibility is it to fire aural cues?"

(last attempt to rekindle this discussion:
<http://lists.w3.org/Archives/Public/wai-xtech/2007May/0000.html>)

an extremely important related question is:

"Shouldn't aural styling be extended to control embedded or designated 
onAction/onMouseOver/onFocus/onHover/onWhatEver audio events so as to 
ensure user control over what are currently usually javascripted 
hacks?"
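to make the contrast concrete, here is a sketch (not a proposal) of how 
such scripted audio events might instead be expressed declaratively with 
the CSS 2 aural properties cue-before and play-during; the file names 
are illustrative only:

```css
/* a sketch, assuming illustrative file names: declarative aural
   styling in place of scripted onFocus/onMouseOver sound handlers */
a:focus { cue-before: url("focus.wav"); }
a:hover { cue-before: url("hover.wav"); }
body    { play-during: url("ambient.wav") repeat; }
```

because these are style rules rather than scripts, a user stylesheet 
could override or silence them, which is precisely the user control the 
question above asks for.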

Why are these issues important to consider?

In the use scenario of a screen-magnifier with an aural overlay of 
earcons but no synthesized speech, the screen-magnifier may trigger or 
request the playing of a sound-clip, but what application is supposed 
to provide that aural feedback?  The user agent?  The underlying 
operating system?

It should be a backplane operation, but someone or something must take 
responsibility for the actual aural rendering of the earcon.  We need 
to discuss where to address this point in terms of conformance criteria.

some (mostly recycled) thoughts on triggering and rendering:
"who plays the target of cue, play-during and other aural icons?"

i firmly believe this needs to be a backplane event, which means that 
the triggering mechanism needs access to the device's default sound 
renderer (i.e. whatever produces system sounds and other under-the-hood 
audio, without opening or depending on a third-party audio file player)

play-during and play-before are event-driven, as are sounds associated 
with MouseOver events, such as those attached to the :hover 
pseudo-class.  The problem with Aural CSS alone is that it paints to an 
event-responsive timeline, and therefore needs permission from 
divergent operating systems and divergent user agents to layer sounds 
(as in the example of an audio file set to play-during while speech is 
simultaneously being output).
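the layering case can be illustrated with the CSS 2 aural syntax itself 
(file names are assumed for illustration):

```css
/* illustration of the layering problem: while a background sound
   plays-during the element, a hover cue may fire at the same moment,
   and speech output may also be active -- three streams to mix */
blockquote       { play-during: url("background.wav") mix repeat; }
blockquote:hover { cue-before: url("attention.wav"); }
```

whether the user agent can honor the `mix` keyword depends on its 
access to the platform's audio mixer, which is exactly the backplane 
question posed above.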


And that means there is a need for a standardized triggering mechanism, 
but where should this need be addressed?  Is this as much a WebAPI 
issue as a CSS issue?

gregory.
-------------------------------------------------------
lex parsimoniae:
  * entia non sunt multiplicanda praeter necessitatem.
-------------------------------------------------------
the law of succinctness:
  * entities should not be multiplied beyond necessity.
-------------------------------------------------------
Gregory J. Rosmaita, oedipus@hicom.net
         Camera Obscura: http://www.hicom.net/~oedipus/
-------------------------------------------------------
Received on Tuesday, 11 December 2007 03:10:07 GMT
