
Re: Audio

From: Andrew Thompson <lordpixel@mac.com>
Date: Mon, 9 Aug 2004 22:03:28 -0400
Message-Id: <78A32BE5-EA71-11D8-9D1D-000A27D7D9DC@mac.com>
Cc: www-style <www-style@w3.org>
To: Dave Raggett <dsr@w3.org>


On Aug 8, 2004, at 8:02 AM, Dave Raggett wrote:

> For instance, the following would in principle play the same sound
> at the same time for all paragraphs and likewise for all h1
> elements.  Probably not what the author intended.
>
>    h1 { sound: url(wind.wav) }
>    p { sound: url(waves.wav) }


Of course, this is assuming a visual presentation model where several 
paragraphs and/or headings are visible on the screen at any one time.

However, if one is presenting a page through speech, there is always 
the concept of the "current" element: the one presently being spoken. 
Effectively there's a playback head or speech cursor progressing 
through the document. In such a model the above declarations would make 
more sense, since there is always a current element to indicate which 
sound to use. At least the problem of compositing sounds is 
reduced to a containment hierarchy, rather than being a mix of 
everything in the viewport.
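
For example, suppose a sound were also declared on the body element 
(reusing the hypothetical 'sound' property from the quoted example):

    body { sound: url(ambience.wav) }
    h1   { sound: url(wind.wav) }
    p    { sound: url(waves.wav) }

While the speech cursor is inside a paragraph, only that paragraph's 
sound and the sounds of its ancestors (here, the body's) are candidates 
for mixing, rather than one sound per element currently visible in a 
viewport.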

Not that any one model is right or wrong, but there are many different 
use cases to consider.
I suppose it goes without saying that SMIL 
(http://www.w3.org/AudioVideo/) is a reference point for any work in 
this area?

AndyT (lordpixel - the cat who walks through walls)
A little bigger on the inside

         (see you later space cowboy ...)
