- From: Olivier Thereaux <olivier.thereaux@bbc.co.uk>
- Date: Wed, 23 May 2012 14:31:18 +0100
- To: public-audio@w3.org
- Message-ID: <4FBCE6A6.6050208@bbc.co.uk>
On 23/05/2012 00:46, Chris Wilson wrote:

> One question - "exposing the behaviour of built-in AudioNodes in a
> manner that authors of JavaScriptAudioNodes can harness" sounds like
> subclassing those nodes to me, which isn't the same thing as providing
> only lower-level libraries (like FFT) and asking developers to do the
> hook-up in JS nodes. What's the desire here? I think Robert and Jussi
> are suggesting not to have the native nodes; Colin seems to be saying
> "just make sure you can utilize the underlying bits in JSNode". Is that
> appropriate?

Should we perhaps use the same model as CSS and split the Web Audio features into "Core" (AudioContext, AudioNode, AudioParam and the JavaScriptAudioNode interface) and a number of levels or modules defining the higher-level features?

This could at least help us frame the debate: I'd like to see a list of "importance" and "implementation complexity" levels rather than the binary all-or-nothing we tend to fall back to. And if it is architecturally sound, we could actually split the spec along those lines and make it easier and faster to produce standards and implementations.

-- Olivier
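[Editor's note: for readers unfamiliar with the "hook-up in JS nodes" being debated, the sketch below shows roughly what processing audio in a JavaScriptAudioNode looked like under the then-current draft. The `createJavaScriptNode(bufferSize, inputs, outputs)` call and `onaudioprocess` callback follow that draft; the gain value, buffer size, and function names here are purely illustrative, not from the spec or the thread.]

```javascript
// Pure per-buffer DSP: a simple gain, the kind of processing an author
// would run inside a JavaScriptAudioNode callback instead of using a
// native GainNode. (Illustrative values and names.)
function applyGain(input, output, gain) {
  for (var i = 0; i < input.length; i++) {
    output[i] = input[i] * gain;
  }
}

// Browser-only wiring, per the 2012 Web Audio draft; skipped when no
// AudioContext implementation is present (e.g. outside a browser).
if (typeof AudioContext !== 'undefined' || typeof webkitAudioContext !== 'undefined') {
  var ctx = new (window.AudioContext || window.webkitAudioContext)();
  // bufferSize = 1024 frames, 1 input channel, 1 output channel
  var node = ctx.createJavaScriptNode(1024, 1, 1);
  node.onaudioprocess = function (e) {
    applyGain(e.inputBuffer.getChannelData(0),
              e.outputBuffer.getChannelData(0),
              0.5); // illustrative gain
  };
  node.connect(ctx.destination);
}
```

The point of contention is visible here: everything above could be built from a "Core" of AudioContext plus the JavaScriptAudioNode interface alone, with higher-level native nodes layered on as separate modules.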
Attachments
- application/pkcs7-signature attachment: S/MIME Cryptographic Signature
Received on Wednesday, 23 May 2012 13:32:09 UTC