
Re: two questions about FFT/IFFT

From: Raymond Toy <rtoy@google.com>
Date: Tue, 12 Apr 2016 16:32:49 -0700
Message-ID: <CAE3TgXGWOdeo9k7TsM+p8HLaU5iNHoKKrBmsdEzk9uRXJwtY-g@mail.gmail.com>
To: Matt Diamond <mdiamond@jhu.edu>
Cc: "public-audio@w3.org" <public-audio@w3.org>
On Tue, Apr 12, 2016 at 2:24 PM, Matt Diamond <mdiamond@jhu.edu> wrote:

> 1) Is it possible to do a non-realtime spectral analysis of a BufferSource
> using an AnalyserNode with an OfflineAudioContext?
>

Yes, but you probably need to use the OfflineAudioContext's suspend()/resume() so that the analyser is read at well-defined render times.
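
Something along these lines should work (untested sketch; it assumes you
already have the decoded AudioBuffer in hand, and the hop size is just an
example):

async function analyseOffline(buffer: AudioBuffer): Promise<Float32Array[]> {
  const ctx = new OfflineAudioContext(
      buffer.numberOfChannels, buffer.length, buffer.sampleRate);
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  src.connect(analyser);
  analyser.connect(ctx.destination);
  src.start();

  const frames: Float32Array[] = [];
  const hop = analyser.fftSize / ctx.sampleRate;   // analysis interval, seconds

  // Schedule all the suspensions before rendering starts; each one pauses
  // the offline render at a known time so the analyser can be read there.
  for (let t = hop; t < buffer.duration; t += hop) {
    ctx.suspend(t).then(() => {
      const frame = new Float32Array(analyser.frequencyBinCount);
      analyser.getFloatFrequencyData(frame);       // dB magnitudes, no phase
      frames.push(frame);
      ctx.resume();
    });
  }

  await ctx.startRendering();
  return frames;
}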

>
> 2) Is it possible for me to use a PeriodicWave to convert an array of
> frequency data from the Analyser Node back into a time-domain waveform? It
> seems like this isn't possible.
>

Right.  The analyser loses the phase information so you can't get the
original back.
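
If all you want is to hear the magnitude spectrum again (with zero phase,
so not the original waveform), one untested idea is to map the analyser
bins onto PeriodicWave harmonics. The dB-to-linear conversion and the
choice of oscillator frequency below are my own guesses, not anything the
API prescribes, and the default PeriodicWave normalization will rescale
the levels:

function droneFromMagnitudes(ctx: AudioContext,
                             analyser: AnalyserNode): OscillatorNode {
  const bins = analyser.frequencyBinCount;       // fftSize / 2
  const db = new Float32Array(bins);
  analyser.getFloatFrequencyData(db);            // magnitudes only, in dB

  const real = new Float32Array(bins);           // cosine coefficients
  const imag = new Float32Array(bins);           // sine coefficients: all zero (phase is gone)
  for (let k = 1; k < bins; k++) {
    real[k] = Math.pow(10, db[k] / 20);          // dB -> linear amplitude
  }

  // With the oscillator running at one bin-width, harmonic k of the
  // PeriodicWave falls on FFT bin k, so the drone has (roughly) the
  // analysed magnitude spectrum.
  const wave = ctx.createPeriodicWave(real, imag);
  const osc = ctx.createOscillator();
  osc.setPeriodicWave(wave);
  osc.frequency.value = ctx.sampleRate / analyser.fftSize;
  return osc;
}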

>
> Basically I'd like to come up with a web-based version of the spectral
> averaging technique that R Luke DuBois employed on Timelapse... averaging
> the spectral content of an entire buffer and then reproducing that content
> as a sustained drone. Is that possible yet with the Web Audio API as it
> currently exists?
>
> Thanks,
> Matt Diamond
>
Received on Tuesday, 12 April 2016 23:33:20 UTC