Re: Aiding early implementations of the web audio API

Quoting Chris Wilson <cwilso@google.com>:

> On Wed, May 23, 2012 at 1:00 AM, Marcus Geelnard <mage@opera.com> wrote:
>
>> On 2012-05-22 19:55:54, Chris Wilson <cwilso@google.com> wrote:
>>
>>> I have to disagree with the definition of "trivial," then.  The only node
>>> types I think could really be considered trivial are Gain, Delay and
>>> WaveShaper - every other type is significantly non-trivial to me.
>>>
>>
>> I'd say that at least BiquadFilterNode, RealtimeAnalyserNode (given our
>> suggested simplifications), AudioChannelSplitter and AudioChannelMerger are
>> trivial too. In fact, if the spec actually specified what the nodes should
>> do, the corresponding JavaScript implementations would be quite close to
>> copy+paste versions of the spec.
>
>
> Again - we must have radically different ideas of what "trivial" means.
>  AudioChannelSplitter/Merger, perhaps - I haven't used them, so haven't
> closely examined them - but I definitely wouldn't put filters and analysers
> in that bucket.

Ok, I admit I might have misused the word "trivial" a bit. However,  
I was actually thinking of our proposed simplification of the  
analyser node (http://www.w3.org/2011/audio/track/issues/74, i.e.  
remove the FFT part and just keep a copy of the last N samples  
around), and for the filter I mainly considered the core filter  
operation, which is basically a one-liner (excluding parameter setup  
etc., which could admittedly amount to a few lines of code):

   y[k] = b0*x[k] + b1*x[k-1] + b2*x[k-2] - a1*y[k-1] - a2*y[k-2];

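To illustrate the order of magnitude (a rough sketch only - the  
coefficient names follow the formula above, and I'm assuming the  
filter state is simply carried over between blocks):

   // Direct form I biquad over one block of samples. x and y are
   // Float32Arrays; state holds the previous two input/output
   // samples carried over from the last block.
   function processBiquad(x, y, b0, b1, b2, a1, a2, state) {
     for (var k = 0; k < x.length; ++k) {
       var out = b0 * x[k] + b1 * state.x1 + b2 * state.x2
                           - a1 * state.y1 - a2 * state.y2;
       state.x2 = state.x1; state.x1 = x[k];
       state.y2 = state.y1; state.y1 = out;
       y[k] = out;
     }
   }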

>>> And even then, when you layer on the complexity involved with handling
>>> AudioParams
>>> (for the gain on Gain and the delayTime on Delay), and the interpolation
>>> between curve points on WaveShaper, I'm not convinced they're actually
>>> trivial.
>>>
>>
>> If handling AudioParams is actually a complex thing, I think we should
>> seriously consider simplifying the corresponding requirements or dropping
>> it altogether.
>
>
> Again, perhaps we disagree on "complex".  I think it is sufficiently
> complex that I'd rather have the underlying platform support it, rather
> than have to implement the timing and interpolation myself.  Naively
> handling them probably IS pretty easy.
>
>
>>> The easiest interface would just be to have an output device stream.
>>>  However, I think having a basic audio toolbox in the form of node types
>>> will cause an explosion of audio applications -
>>>
>>
>> ...which is why there are JS libs. The Web Audio API is already too
>> complex to use for most Web developers, so there are already libs/wrappers
>> available for making it easier to build basic audio applications.
>>
>
> I'm not sure what you're trying to say.

Ok. I'll try to explain my reasoning a bit further:

The Web Audio API has a certain level of complexity (don't get me  
wrong - it's complex by necessity). Quoting a blog post [1]: "the  
API is extremely low-level and cumbersome to use". The blogger (Matt  
Hackett) then went on to provide a small wrapper library (Audia) for  
simple audio playback. This reminds me a lot of the situation with  
WebGL, where helper libraries such as GLGE [2] and three.js [3] came  
along quite quickly to provide higher-level functionality on top of  
the very low-level WebGL API.

So, I expect JS audio libraries to emerge (some already have, but  
once we have cross-browser support I think we will see much more  
serious work happen). Whether these libraries use native nodes or  
provide custom JS implementations under the hood will make no  
difference to the library user.
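
For instance (hypothetical helper name, just to illustrate the  
principle), a library could do something like:

   // Hand back a native node where the browser provides one, and an
   // equivalent JS-based node elsewhere; the caller just connect()s
   // it like any other node.
   function createFilter(context) {
     if (context.createBiquadFilter)
       return context.createBiquadFilter();
     return makeJSBiquadNode(context);  // hypothetical JS fallback
   }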

> There's too much complexity, it's already having to be wrapped for
> real-world developers, so let's push more complexity on them?

But who are we pushing this added complexity onto? The people  
writing the wrappers are usually prepared to spend quite some time  
getting things working properly and making life easier for  
"real-world developers". In that context, writing some JS code to  
implement filters etc. would not be THAT much work. There are  
already JS audio processing libraries available (e.g. dsp.js [4]),  
and I'm quite confident that getting the core audio nodes  
implemented in JS would be a surmountable task (at least it should  
be simpler than putting it all in a spec and getting it implemented  
natively in all browsers).
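
For example (a mono sketch against the current draft, assuming the  
coefficients b0..a2 have already been computed from the filter  
parameters, and ignoring AudioParam automation), hooking the filter  
loop from above into the graph via a JavaScriptAudioNode is only a  
handful of lines:

   // Wrap the JS filter loop in a node that can be connected into
   // the graph like any native node.
   var state = { x1: 0, x2: 0, y1: 0, y2: 0 };
   var node = context.createJavaScriptNode(1024, 1, 1);
   node.onaudioprocess = function (e) {
     var x = e.inputBuffer.getChannelData(0);
     var y = e.outputBuffer.getChannelData(0);
     processBiquad(x, y, b0, b1, b2, a1, a2, state);
   };
   // source.connect(node); node.connect(context.destination);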

/Marcus


[1] http://www.lostdecadegames.com/audia-is-a-library-for-simplifying-the-web-audio-api/
[2] http://www.glge.org/
[3] http://mrdoob.github.com/three.js/
[4] https://github.com/corbanbrook/dsp.js/

Received on Wednesday, 23 May 2012 22:11:30 UTC