- From: Chris Wilson <cwilso@google.com>
- Date: Wed, 23 May 2012 10:50:09 -0700
- To: Olivier Thereaux <olivier.thereaux@bbc.co.uk>
- Cc: public-audio@w3.org
- Message-ID: <CAJK2wqU=VU53f44NB5HNpyGpZ4wu==Bs84UhdCYM_oYEhUTAvQ@mail.gmail.com>
It's an interesting question. Do you mean "core" like CSS1 Core/full (which
was a disastrous waste of time), or core like CSS 2.1?

On Wed, May 23, 2012 at 6:31 AM, Olivier Thereaux <olivier.thereaux@bbc.co.uk> wrote:

> On 23/05/2012 00:46, Chris Wilson wrote:
>
>> One question - "exposing the behaviour of built-in AudioNodes in a
>> manner that authors of JavaScriptAudioNodes can harness" sounds like
>> subclassing those nodes to me, which isn't the same thing as providing
>> only lower-level libraries (like FFT) and asking developers to do the
>> hook-up in JS nodes. What's the desire here? I think Robert and Jussi
>> are suggesting not to have the native nodes; Colin seems to be saying
>> "just make sure you can utilize the underlying bits in JSNode". Is that
>> appropriate?
>
> Should we perhaps use the same model as CSS and split the web audio
> features as "Core" (AudioContext, AudioNode, AudioParam and the
> JavaScriptAudioNode interface) and a number of levels or modules defining
> the higher level features?
>
> This could at least help us in framing the debate: I'd like to see a list
> of "importance" and "implementation complexity" levels rather than the
> binary all-or-nothing we tend to fall back to.
>
> And if it is architecturally sound, we could actually split the spec along
> those lines and make it easier and faster to produce standards and
> implementations.
>
> --
> Olivier
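For context, the "do the hook-up in JS nodes" approach under discussion amounts to writing the processing yourself inside a JavaScriptAudioNode rather than relying on a native node. A minimal sketch, assuming the WebKit-prefixed implementation of the draft at the time (webkitAudioContext, createJavaScriptNode); the buffer size, channel counts, and gain value are illustrative, not taken from the thread:

```js
// Sketch of "hook-up in JS nodes": custom per-sample processing in a
// JavaScriptAudioNode instead of a native node such as a GainNode.
// Assumes the 2012-era WebKit-prefixed Web Audio implementation.
var context = new webkitAudioContext();

// 1024-frame buffer, 1 input channel, 1 output channel (illustrative values).
var jsNode = context.createJavaScriptNode(1024, 1, 1);

jsNode.onaudioprocess = function (event) {
  var input = event.inputBuffer.getChannelData(0);
  var output = event.outputBuffer.getChannelData(0);
  for (var i = 0; i < input.length; i++) {
    output[i] = input[i] * 0.5; // custom processing, run in script
  }
};

// Wire the script node between a source and the destination.
var source = context.createBufferSource();
source.connect(jsNode);
jsNode.connect(context.destination);
```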
Received on Wednesday, 23 May 2012 17:50:39 UTC