
Re: css3-speech, UA sound mixing (was Re: TPAC F2F and Spec Proposals)

From: Alistair MacDonald <al@signedon.com>
Date: Tue, 18 Oct 2011 18:45:15 -0400
Message-ID: <CAJX8r2=D5mRx3eYYCO6wQ70jswXQkRk5Z906N3ivHwXSqQfZ4w@mail.gmail.com>
To: Daniel Weck <daniel.weck@gmail.com>
Cc: www-style list <www-style@w3.org>, public-audio@w3.org, public-xg-htmlspeech@w3.org, Chris Rogers <crogers@google.com>, "robert@ocallahan.org" <rocallahan@gmail.com>, Stefan Håkansson LK <stefan.lk.hakansson@ericsson.com>

This is really interesting; I will read over the CSS3 Speech module tonight.
(I was not aware of it until now.)


On Tue, Oct 18, 2011 at 4:12 PM, Daniel Weck <daniel.weck@gmail.com> wrote:

> On 18 Oct 2011, at 19:52, Alistair MacDonald wrote:
> > I think we need a more complete Browser Audio Framework, that can be
> broken down into the following components:
> >
> > 1) A browser UI and architecture for controlling audio -- at a tab and
> device level. It would not be a pressing matter to standardize this
> functionality, and it could be done independently by each browser vendor.
> > 2) A "Web Audio Data API" with high-resolution timing, 3D spatialization
> of sources, with standardized effects and algorithms for music and games
> that accepts inputs from other APIs.
> > 3) A common "Sound Mixer API" for the window, which allows for panning,
> mixing, muting, and creating JavaScript sinks and worker threads. RTC, Web Audio
> Data, and HTML Media elements would play back through the Sound Mixer API.
> >
> > I have created a diagram to visualize this concept here:
> > http://f1lt3r.com/w3caudio/Browser%20Audio%20Routing.jpg
> >
> > With this in mind, I think the most pressing concern right now is a
> Sound Mixer API, then a Web Audio Data API, and finally (who knows how far
> out this would be) an overhaul of the browser's internal audio architecture,
> adding UI features to the UA.
> (added CSS Working Group + HTML Speech Incubator Group to this email
> thread)
> Thank you for initiating this discussion (the overview diagram is helpful,
> by the way). However, I should point out that the CSS Speech Module is also
> part of the web-browser audio ecosystem:
> http://www.w3.org/TR/css3-speech
> This "aural" presentation layer consists of audio output generated
> primarily from the underlying speech synthesizer (TTS engine), but also from
> the browser's regular sound interface (optional audio cues before and/or
> after spoken words).
> Note about volume levels: the user-agent stylesheet specifies default
> "settings", content authors can alter speech/cue sound levels as they wish,
> and user stylesheets can override authored intent (as per the traditional
> CSS "cascade" mechanism and "!important" rules).
> Note about audio spatialization: a future version of the CSS Speech Module
> will support 3D aural positioning (in current Level 3 of the specification,
> only stereo panning is supported).
> The mixing architecture proposed by Alistair would ultimately benefit
> accessibility, because it would provide end-users with fine-grained control
> mechanisms over the (potentially concurrent) streams of aural information,
> all from a unified and coherent interface. I look forward to hearing more
> about this.
> Kind regards, Daniel
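
[Editorial note: for readers unfamiliar with the module, the volume-level
cascade and stereo panning that Daniel describes can be sketched roughly as
follows, using properties from the CSS Speech (Level 3) draft; the selector
and cue file name are purely illustrative.]

```css
/* Author stylesheet: adjust speech level, add an audio cue, pan left.
   (The file name "chime.wav" is an illustrative placeholder.) */
h1 {
  voice-volume: soft;                 /* quieter than the UA default */
  cue-before: url("chime.wav") -6dB;  /* cue played before the spoken text */
  voice-balance: left;                /* stereo panning, as in current Level 3 */
}

/* User stylesheet: override authored intent via the cascade. */
h1 {
  voice-volume: x-loud !important;
}
```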
Received on Wednesday, 19 October 2011 02:44:19 UTC
