Re: Audio EQ Cookbook in the W3C Web Audio API Spec

On 3/27/13 3:10 PM, Doug Schepers wrote:
> Hi, Robert-
>
> As we discussed offlist, we currently reference your Audio EQ Cookbook 
> [1] in the Web Audio API spec. We'd like to have it in a more stable 
> form on the W3C site, either as a standalone document or as an 
> appendix in the Web Audio API spec.
>
> You shared some specific variations and thoughts on the current 
> cookbook, and feedback on the current Web Audio API spec, and it would 
> be great to have that discussion here on the public Audio WG list.
>
> [1] http://www.musicdsp.org/files/Audio-EQ-Cookbook.txt
>

yeah, i'm fine with it.  do you want this thing in .pdf form?  with the 
equations "typeset" with TeX or something?  how will you be rendering 
math equations in this thing?   if i make a .pdf, can someone convert 
that to HTML, or might you want to just point to the .pdf?

you should know that the definition of Q is fudged a bit for the peaking 
EQ.  i did this to have a consistent formula (no "if" statements) for 
boost and cut EQ and so that the cut looks exactly like the boost, but 
upside-down.  a peaking EQ is the sum of the output of a BPF and a 
wire.  if you want Q to be the regular Q of the BPF, then we can adjust 
the formula slightly, but there would have to be an "if" statement in 
there, using a *different* formula for a cut EQ than for the boost EQ.
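
for concreteness, here's a small sketch of that "consistent formula" 
peaking EQ, using the cookbook recipe itself; it demonstrates that a 
cut is exactly the reciprocal of the matching boost, i.e. the boost 
curve upside-down in dB (Python, with illustrative names):

```python
import cmath
import math

def peaking_eq_coeffs(fs, f0, q, db_gain):
    """cookbook peaking EQ biquad, normalized so a0 = 1."""
    big_a = 10.0 ** (db_gain / 40.0)          # sqrt of the linear gain
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0 + alpha * big_a, -2.0 * math.cos(w0), 1.0 - alpha * big_a]
    a = [1.0 + alpha / big_a, -2.0 * math.cos(w0), 1.0 - alpha / big_a]
    return [c / a[0] for c in b], [1.0, a[1] / a[0], a[2] / a[0]]

def biquad_mag(b, a, w):
    """|H(e^jw)| evaluated directly on the unit circle."""
    z = cmath.exp(-1j * w)
    return abs((b[0] + b[1] * z + b[2] * z * z) /
               (a[0] + a[1] * z + a[2] * z * z))

# a +6 dB boost and a -6 dB cut (same f0, same Q) multiply out to
# unity gain at every frequency: the cut is the exact inverse filter
b_up, a_up = peaking_eq_coeffs(48000.0, 1000.0, 1.0, +6.0)
b_dn, a_dn = peaking_eq_coeffs(48000.0, 1000.0, 1.0, -6.0)
print(all(abs(biquad_mag(b_up, a_up, w) * biquad_mag(b_dn, a_dn, w) - 1.0)
          < 1e-9 for w in (0.1, 0.5, 1.0, 2.0)))  # -> True
```

(the inversion is exact because, with the gain's square root in the 
coefficients, the cut's numerator is the boost's denominator and vice 
versa.)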

i can append some nifty formulas for plotting the magnitude of the 
frequency response that simplify things and work a little better 
numerically.  they are in terms of the straight biquad coefficients, so 
they do not depend on which filter type is implemented.  maybe that 
should go in there?
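
his exact formulas aren't quoted in this message, but one expression of 
that kind can be written in terms of phi = sin^2(w/2) and checked 
against a brute-force complex evaluation (a sketch, with illustrative 
names):

```python
import cmath
import math

def biquad_mag_squared(b, a, w):
    """|H(e^jw)|^2 for H(z) = (b0 + b1/z + b2/z^2)/(a0 + a1/z + a2/z^2),
    written in terms of the raw biquad coefficients via
    phi = sin^2(w/2); no complex arithmetic needed."""
    phi = math.sin(w / 2.0) ** 2
    def q(c0, c1, c2):
        return ((c0 + c1 + c2) ** 2 / 4.0
                - phi * (4.0 * c0 * c2 * (1.0 - phi) + c1 * (c0 + c2)))
    return q(*b) / q(*a)

# sanity check against direct evaluation on the unit circle
b, a = (1.2, -0.5, 0.3), (1.0, -0.9, 0.4)
for w in (0.05, 0.3, 1.0, 2.5):
    z = cmath.exp(-1j * w)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    print(abs(biquad_mag_squared(b, a, w) - abs(h) ** 2) < 1e-9)  # -> True
```

(near DC, phi goes to zero smoothly, so the (c0 + c1 + c2) term 
dominates without the catastrophic cancellation you can get evaluating 
the polynomials directly.)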

anyway, those are my initial ideas about the cookbook EQ filters.  
probably someone can come up with an automated Butterworth or Type 1 or 
Type 2 Tchebyshev cascade of biquad filter sections and relate that to 
the cookbook.
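
as a sketch of that cascade idea (illustrative names, assuming the 
standard cookbook LPF recipe): an even-order Butterworth lowpass falls 
out of cascaded cookbook LPF biquads whose section Qs come from the 
Butterworth pole-pair angles, and the cascade lands at -3 dB at f0 as 
it should:

```python
import cmath
import math

def cookbook_lpf(fs, f0, q):
    """cookbook lowpass biquad, normalized so a0 = 1."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    cw = math.cos(w0)
    b = [(1.0 - cw) / 2.0, 1.0 - cw, (1.0 - cw) / 2.0]
    a = [1.0 + alpha, -2.0 * cw, 1.0 - alpha]
    return [c / a[0] for c in b], [1.0, a[1] / a[0], a[2] / a[0]]

def butterworth_lpf_sections(fs, f0, order):
    """even-order Butterworth lowpass as a cascade of cookbook LPF
    biquads; each section Q comes from a Butterworth pole-pair angle."""
    assert order % 2 == 0
    sections = []
    for k in range(order // 2):
        theta = math.pi * (2 * k + 1) / (2 * order)  # off the negative real axis
        sections.append(cookbook_lpf(fs, f0, 1.0 / (2.0 * math.cos(theta))))
    return sections

def cascade_mag(sections, w):
    """product of the section magnitudes at radian frequency w."""
    z = cmath.exp(-1j * w)
    mag = 1.0
    for b, a in sections:
        mag *= abs((b[0] + b[1] * z + b[2] * z * z) /
                   (a[0] + a[1] * z + a[2] * z * z))
    return mag

fs, f0 = 48000.0, 1000.0
secs = butterworth_lpf_sections(fs, f0, 4)
print(round(cascade_mag(secs, 2.0 * math.pi * f0 / fs), 4))  # -> 0.7071
```

(the -3 dB point comes out exactly at f0 because the cookbook biquads 
are prewarped bilinear transforms matched at f0, and the Butterworth 
section Qs multiply out to 1/sqrt(2).)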

also, i have a couple of ideas for him about modularity and the class 
for audio signals.  i think that every signal should have attributes 
for sample rate and blocksize (number of samples per block), and maybe 
the nature of the sample words (fixed or float, and bit width), 
embedded in the class.  also, each signal instantiation should be owned 
by the instantiation of the processing module that defines what the 
samples are.  i.e. output signals are owned by the widget that they are 
an output of.  then, in making a connection, any input signal simply 
points to where the output is defined.  you never connect outputs 
together, and as many inputs as you want can be connected to a single 
output.  and this output is created automatically when the processing 
widget that defines its samples is created.  of course, a processing 
widget can have as many input signals (where only pointers are defined) 
and output signals (where space for the samples is created or 
allocated) as is natural for that processing widget.  i don't think it 
should be necessary for the user to explicitly create the signal 
instances that are the outputs of some widget created by another 
invocation.
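
a minimal sketch of that ownership idea (hypothetical class names, not 
anything from the Web Audio API):

```python
class Signal:
    """a block of samples plus its format, owned by the widget that
    produces it."""
    def __init__(self, owner, sample_rate, block_size, fmt="float32"):
        self.owner = owner                  # the widget whose output this is
        self.sample_rate = sample_rate
        self.block_size = block_size        # samples per block
        self.fmt = fmt                      # nature of the sample words
        self.samples = [0.0] * block_size   # space allocated by the owner

class Widget:
    """a processing module: output signals are created automatically
    with the widget; input slots are mere pointers to other widgets'
    outputs, never allocated storage."""
    def __init__(self, sample_rate, block_size, n_inputs=1, n_outputs=1):
        self.inputs = [None] * n_inputs     # filled in by connect_input()
        self.outputs = [Signal(self, sample_rate, block_size)
                        for _ in range(n_outputs)]

    def connect_input(self, i, source, j=0):
        # any number of inputs may point at one output;
        # outputs are never connected to each other
        self.inputs[i] = source.outputs[j]

# fan-out: two consumers share the one block the producer owns
src = Widget(48000, 128)
a, b = Widget(48000, 128), Widget(48000, 128)
a.connect_input(0, src)
b.connect_input(0, src)
print(a.inputs[0] is src.outputs[0] and b.inputs[0] is src.outputs[0])  # -> True
```

(the user never constructs a Signal directly; it comes into existence 
with the widget that defines its samples, which is the point being 
made above.)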

if you do this right, you can have some pretty simple processing blocks 
to do sample-rate conversion for simple, constant ratios.  an upsampler 
would have maybe twice as many samples in the blocksize of its output 
as in its input.  and in the same process, you could have processing of 
signals done at twice the sampling rate, and then LPF and downsample 
later to go back to the real world.
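
the 2x upsampler's block-size bookkeeping is trivial (zero-stuffing 
only; a real converter would follow it with the LPF just mentioned, and 
the name here is purely illustrative):

```python
def upsample2x(block):
    """zero-stuff by 2: the output block has twice as many samples as
    the input block, with the original samples at the even indices."""
    out = [0.0] * (2 * len(block))
    out[::2] = block
    return out

print(len(upsample2x([0.25] * 128)))  # -> 256
```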

that should probably be another thread.  Chris wrote to me about it and 
i have to read it a


sorry to be slow in responding.  i didn't have time to deal with it last 
week and then sorta forgot about it.

-- 

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."

Received on Tuesday, 2 April 2013 04:43:12 UTC