Comments on the Web Audio API proposal

Hi,

As I said in the last teleconf, I am writing a few comments on the
current state of the Web Audio API proposal, more specifically about
the nodes that I find lacking or that could be merged.

Thoughts about ConvolverNode and BiquadFilterNode
------------------------
First of all, I like the fact that there is no longer a ReverbNode and
that instead we have a ConvolverNode.  If I understood correctly, the
ConvolverNode is basically an FIR (finite impulse response) filter,
which is probably implemented internally by frequency-domain
multiplication when the impulse responses are long.
I also think we should have the ability to create infinite impulse
response (IIR) filters.  I know that this is already partly possible
with the BiquadFilterNode.  However, that only gives us 3 b and 2 a
coefficients.
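
To make the limitation concrete, the biquad computes (assuming the
usual normalization a0 = 1):

  y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]

so the feedback memory is only two samples, which is not enough to
express arbitrary IIR designs without chaining many nodes.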

As I see it, any filter (whether it is an FIR or a biquad IIR) can be
described as a general IIR filter, and therefore the API would be much
simpler if we had only one node for all filters: a FilterNode that
under the hood can have specialized implementations for the FIR case,
the long-impulse-response FIR case, the biquad case and the general
case.  For convenience we could have special presets or functions in
the API to generate the a and b coefficients for certain interesting
filters (certain reverbs, lowpass, highpass, ...).
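
To illustrate what I mean (all of these names are made up, none of
them are in the current draft), something along these lines:

  // Hypothetical general FilterNode; names are purely illustrative.
  var context = new AudioContext();
  var source = context.createBufferSource();
  var filter = context.createFilter();          // hypothetical factory

  // Arbitrary feedforward (b) and feedback (a) coefficients,
  // here a simple one-pole lowpass y[n] = 0.1*x[n] + 0.9*y[n-1]:
  filter.setCoefficients([0.1], [1.0, -0.9]);   // hypothetical, a[0] = 1

  // Presets could simply fill in b and a for us, e.g.:
  // filter.setLowpass(1000, 0.7);              // hypothetical: cutoff Hz, Q

  source.connect(filter);
  filter.connect(context.destination);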


Thoughts about the RealtimeAnalyzer
------------------------
As I have expressed earlier, I think this is quite a vague node that
is very specific to visualization.  I think a much better node (with a
more clearly defined behavior) would be an FFTNode.  This node would
simply perform an FFT (it would also be important to allow it to
perform the IFFT) and give access to the magnitude and phase (or to
the real and imaginary parts).  Such a node would be extremely useful
not only for visualization, but also for analysis, synthesis and
frequency-domain effects.
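
To make this a bit more concrete, here is a rough sketch of how such a
node could look (again, every name here is invented for illustration):

  // Hypothetical FFTNode; nothing of this exists in the current draft.
  var context = new AudioContext();
  var source = context.createBufferSource();

  var fft = context.createFFT(1024);            // hypothetical, frame size
  fft.inverse = false;                          // true would give the IFFT

  source.connect(fft);
  fft.connect(context.destination);

  // After each frame we could read either representation:
  var mag   = new Float32Array(512);
  var phase = new Float32Array(512);
  fft.getMagnitude(mag);                        // hypothetical accessors
  fft.getPhase(phase);
  // or fft.getReal(re); fft.getImaginary(im);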


Thoughts about the general API
------------------------
One last thing I am worried about: it should be possible to use the
FFT and filter nodes on things other than an audio stream (e.g. on a
simple Float32Array that we may have at hand).  The motivation is that
in many cases one may not want to perform the FFT directly on audio
signals.  There are many examples of this:
 - in beat tracking we may compute the spectrum (using the FFTNode) of
an onset detection function
 - in pitch estimation we may perform the autocorrelation (using the
FilterNode) of the spectrum

This means that I should be able to simply create an FFTNode or a
FilterNode and ask it to operate on a given Float32Array that I pass
to it, and this should be easy (maybe without the need for a context
or an AudioDestinationNode).
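
For example, something along these lines (once more, all of the names
are hypothetical):

  // Hypothetical one-shot use on plain Float32Arrays, without a
  // context or an AudioDestinationNode.
  var odf = new Float32Array(512);              // e.g. an onset detection function
  // ... fill odf ...

  var fft = new FFTNode(512);                   // hypothetical constructor
  var re  = new Float32Array(512);
  var im  = new Float32Array(512);
  fft.compute(odf, re, im);                     // hypothetical one-shot call

  // The same for the FilterNode:
  // var filter = new FilterNode();
  // filter.setCoefficients(b, a);
  // filter.compute(input, output);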

Any thoughts?  We can also discuss this in more detail in today's
teleconf if you wish; sorry for being last minute on this.


-- 
ricard
http://twitter.com/ricardmp
http://www.ricardmarxer.com
http://www.caligraft.com
