
Re: Detecting time-averaged levels

From: Chris Rogers <crogers@google.com>
Date: Sat, 4 Aug 2012 13:32:55 -0700
Message-ID: <CA+EzO0nY0fZQHw75feMg+-PP3o2fYZ-W7SV-SAu5FQnN7AH6WA@mail.gmail.com>
To: Peter van der Noord <peterdunord@gmail.com>
Cc: "public-audio@w3.org" <public-audio@w3.org>
On Sat, Aug 4, 2012 at 3:48 AM, Peter van der Noord wrote:

> Aren't these solutions overly CPU-intensive (for just measuring a level) and
> overcomplicated? I can imagine someone who's not that into sound
> techniques being scared off if they have to use combinations of
> convolvers, filters, and compressors to read out the sound level of a signal.
> Why not indeed add that to the analyser? It seems like the perfect place.
> Peter

Hi Peter, we need to do some work to improve the analyser node.  I tend to
agree with you that "metering" a signal level makes sense as a
simple-to-use, built-in capability.  It's kind of funny, because people are
already using the analyser node to do metering, albeit in a roundabout
way that works around its limitations.  Here's a cool demo by Kevin Ennis
which does this:
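To make the workaround concrete, here is a minimal sketch of the kind of metering people build on top of the analyser node today. The function name `rmsLevel` and the wiring in the comments are mine, not from the demo or the spec:

```javascript
// Root-mean-square level of one block of time-domain samples,
// such as a block pulled out of an AnalyserNode (see comments below).
function rmsLevel(samples) {
  var sumOfSquares = 0;
  for (var i = 0; i < samples.length; i++) {
    sumOfSquares += samples[i] * samples[i];
  }
  return Math.sqrt(sumOfSquares / samples.length);
}

// In a page, this would be polled on each animation frame, roughly:
//   var analyser = context.createAnalyser();
//   source.connect(analyser);
//   var bytes = new Uint8Array(analyser.fftSize);
//   analyser.getByteTimeDomainData(bytes);   // unsigned bytes, 128 = silence
//   var samples = new Float32Array(bytes.length);
//   for (var i = 0; i < bytes.length; i++) samples[i] = (bytes[i] - 128) / 128;
//   var level = rmsLevel(samples);
```

The roundabout part is exactly that byte-to-float conversion dance: the analyser exposes visualization-oriented data, not a level, so everyone reimplements the last step themselves.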

As for the other technique of using the convolver, its performance depends
on the length of the impulse response, but it could be quite reasonable in
some cases.  A BiquadFilterNode (or several) would be a lot more efficient
than this.
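For intuition, the filter-based approach amounts to an envelope follower: rectify the signal, then smooth it with a low-cutoff lowpass. The sketch below does both steps in plain JS; in a graph, the rectification could come from a WaveShaperNode with an absolute-value curve feeding a BiquadFilterNode. The function name `envelopeFollower` and the `smoothing` parameter are my inventions for illustration:

```javascript
// Envelope follower: full-wave rectify, then a one-pole lowpass.
// smoothing in (0, 1]; smaller values average over a longer window.
function envelopeFollower(samples, smoothing) {
  var env = new Float32Array(samples.length);
  var y = 0;
  for (var i = 0; i < samples.length; i++) {
    var rectified = Math.abs(samples[i]);   // full-wave rectification
    y = y + smoothing * (rectified - y);    // one-pole lowpass smoothing
    env[i] = y;
  }
  return env;
}
```

This is per-sample work with no FFT and no convolution, which is why a filter chain beats a ConvolverNode for plain level detection.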

These kinds of techniques do illustrate, however, that it's possible to
combine the nodes in many different ways to achieve a wide range of effects.

I think that many people don't fully realize the potential that these nodes
have when used together to produce more complex effects.  Off the top of my
head, here's an incomplete list of some types of effects that I think are
currently possible using the built-in "native" nodes in combination.

* gain
* crossfade
* mono blend
* monoizer
* mid-side processing
* arbitrary matrix mixing, B-format etc.
* panning: matrix panning

* reverb, ambience, stereoizers, early-reflections, diffusor
* spatialized effects
* delays: spatialized multi-tap, ping-pong, BPM-synced, feedback effects
with lowpass filters, waveshapers, etc.
* chorus and other modulated delay effects
* filters: basic tone controls, lowpass, highpass, lowshelf, highshelf,
parametric, allpass, notch, graphic EQ, multi-band filters, subsonic,
resonant filters, arbitrary linear filters
    (filter parameters can be automated and can be BPM-synced)
* phaser and other modulated filter effects
* granular synthesis (arbitrary scheduling and grain windows), BPM-synced
* noise and crackle generation
* waveshaping, distortion, overdrive, bit-crushing, multi-band distortion,
amp simulation
* comb filter (with limitations)
* tremolo
* AM (amplitude modulation) effects, including frequency-shifter

* dynamics compression, including multi-band

* oscillators (anti-aliased): subtractive synthesis with flexible filter
combinations (2pole, 4pole, and many more), arbitrary periodic waveforms,
oscillator stacking, detuning, randomization, arbitrary envelopes
* FM synthesis, many different architectures: multi-modulator,
multi-carrier, randomization
* spatialized filtered noise synthesis
* vocoder
* hybrid effects using combinations of the above
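To give one small example from the list, a tremolo is just an LFO modulating a gain: in the graph, an OscillatorNode driving a GainNode's gain AudioParam. The math below is my own illustrative sketch (the name `tremoloGain` is not from the API), showing what that modulation computes per unit of time:

```javascript
// Tremolo gain at time t (seconds): the gain swings between
// (1 - depth) and 1 at rateHz cycles per second.
function tremoloGain(t, rateHz, depth) {
  return 1 - depth * 0.5 * (1 + Math.sin(2 * Math.PI * rateHz * t));
}

// Equivalent node wiring in a page (2012-era factory names), roughly:
//   var lfo = context.createOscillator();        // the modulator
//   lfo.frequency.value = rateHz;
//   var depthGain = context.createGainNode();    // scales LFO to depth
//   var amp = context.createGainNode();          // carries the audio signal
//   lfo.connect(depthGain);
//   depthGain.connect(amp.gain);                 // audio-rate param modulation
```

The same connect-an-oscillator-to-an-AudioParam pattern generalizes to several other entries above, e.g. vibrato (modulating delayTime) and auto-wah (modulating filter frequency).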

I think we cover 95% of typical use cases, and can address many of the
remaining ones using custom processing in JavaScriptAudioNode.
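For the remaining cases, the escape hatch looks like this: a handler is called once per block of frames and can run arbitrary JS over the samples. The per-block helper below (`blockPeak`, my name) is the kind of thing a handler would do for, say, a peak meter:

```javascript
// Peak absolute sample value in one processing block.
function blockPeak(samples) {
  var peak = 0;
  for (var i = 0; i < samples.length; i++) {
    var v = Math.abs(samples[i]);
    if (v > peak) peak = v;
  }
  return peak;
}

// Browser wiring with the 2012-era factory method, roughly:
//   var node = context.createJavaScriptNode(1024, 1, 1);  // buffer, ins, outs
//   node.onaudioprocess = function (e) {
//     var level = blockPeak(e.inputBuffer.getChannelData(0));
//     // ...update a meter, etc.
//   };
```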

I'm really sorry I haven't had time to write more demos showing the more
advanced uses of the API I've described.  My life as an engineer at Google
is very demanding, and working on the spec takes a lot of time too - you
guys keep me busy!  But I can assure you that I would rather be writing
cool audio applications and demos :)

Received on Saturday, 4 August 2012 20:33:24 UTC
