
AnalyserNode Question

From: Nick Thompson <ncthom91@gmail.com>
Date: Sun, 8 Sep 2013 09:33:20 -0700
Message-ID: <CAOXEKCNzqLKiAetpE7ank2c0KZcE26ECfiEsnJpLfpt7Bq08+g@mail.gmail.com>
To: public-audio@w3.org
Hi all,

While working through a little web audio experiment yesterday, I discovered
a small detail of the AnalyserNode implementation: the results written to
the array parameter of the get{Float,Byte}FrequencyData methods are simply
magnitudes.
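
Concretely, I mean something like this (a minimal sketch, assuming an
existing AudioContext named `context`):

    var analyser = context.createAnalyser();
    var data = new Float32Array(analyser.frequencyBinCount);
    analyser.getFloatFrequencyData(data);
    // Each entry of `data` is now a magnitude (in dB) for one
    // frequency bin -- there's no phase information anywhere.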

I'm writing to ask why that is the case. It seems like it would be much
more flexible to support an API wherein I pass two array parameters, one
for the sin coefficients and one for the cos coefficients. Then, if I want
to graph the results or something, I'd have to iterate over the arrays
myself and compute my own magnitudes, but at least I'd have the option.
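
Hypothetically, I'm imagining something along these lines (this signature
is made up, not in the spec; it's just to illustrate):

    // Hypothetical two-array variant of getFloatFrequencyData:
    var sins = new Float32Array(analyser.frequencyBinCount);
    var coss = new Float32Array(analyser.frequencyBinCount);
    // analyser.getFloatFrequencyData(sins, coss);

    // Computing the magnitudes myself would then be trivial:
    var mags = new Float32Array(sins.length);
    for (var i = 0; i < sins.length; i++) {
      mags[i] = Math.sqrt(sins[i] * sins[i] + coss[i] * coss[i]);
    }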

In particular, the example I was hoping to accomplish was to feed a
BufferSourceNode into an AnalyserNode, then, in the AudioProcessingEvent
of a ScriptProcessor, read the data from the analyser node, push the sin
and cos coefficients into a WaveTable, and set the wavetable on an
oscillator to reconstruct the sound of the buffer as an oscillator.
Unfortunately, there's no way to really do that, as I can't recover the
sin/cos coefficients from the magnitudes alone.
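
For reference, the graph I was trying to wire up looks roughly like this
(simplified; `sourceBuffer` stands in for an AudioBuffer I've already
decoded, and the commented-out lines mark where I get stuck):

    var source = context.createBufferSource();
    source.buffer = sourceBuffer;

    var analyser = context.createAnalyser();
    var processor = context.createScriptProcessor(analyser.fftSize, 1, 1);

    source.connect(analyser);
    analyser.connect(processor);
    processor.connect(context.destination);

    var mags = new Float32Array(analyser.frequencyBinCount);
    processor.onaudioprocess = function () {
      // All I can read back here are magnitudes...
      analyser.getFloatFrequencyData(mags);
      // ...but createWaveTable wants real/imag (cos/sin) arrays,
      // which I can't recover from magnitudes alone:
      // var table = context.createWaveTable(real, imag);
      // osc.setWaveTable(table);
    };

    source.start(0);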

So, primarily I'm just curious about the reasoning behind this detail of
the implementation, though if any of you have suggestions toward
completing my example, I'd love to hear them! Bear in mind that I have no
formal DSP knowledge, and only a little informal DSP knowledge. Also, if
this is not the place for questions like these, please point me to the
right place (I'm aware of the Web Audio Dev group, but I'm not sure that's
the place to ask this kind of "why" question).


Nicholas Thompson
B.S. Computer Science
Cornell University, 2013
Received on Sunday, 8 September 2013 16:33:51 UTC