
Core + Levels/Modules ? (Was: Aiding early implementations of the web audio API)

From: Olivier Thereaux <olivier.thereaux@bbc.co.uk>
Date: Wed, 23 May 2012 14:31:18 +0100
Message-ID: <4FBCE6A6.6050208@bbc.co.uk>
To: public-audio@w3.org
On 23/05/2012 00:46, Chris Wilson wrote:

> One question - "exposing the behaviour of built-in AudioNodes in a
> manner that authors of JavaScriptAudioNodes can harness" sounds like
> subclassing those nodes to me, which isn't the same thing as providing
> only lower-level libraries (like FFT) and asking developers to do the
> hook-up in JS nodes.  What's the desire here?  I think Robert and Jussi
> are suggesting not to have the native nodes; Colin seems to be saying
> "just make sure you can utilize the underlying bits in JSNode".  Is that
> appropriate?


Should we perhaps use the same model as CSS and split the web audio 
features into a "Core" (AudioContext, AudioNode, AudioParam and the 
JavaScriptAudioNode interface) plus a number of levels or modules 
defining the higher-level features?

This could at least help us frame the debate: I'd like to see a list of 
"importance" and "implementation complexity" levels rather than the 
binary, all-or-nothing choice we tend to fall back on.

And if it is architecturally sound, we could actually split the spec 
along those lines and make it easier and faster to produce standards and 
implementations.
-- 
Olivier



Received on Wednesday, 23 May 2012 13:32:09 GMT
