Re: Web Audio API questions and comments

Hey Joe - I can respond to a couple of these.

On Tue, Jun 19, 2012 at 7:39 AM, Joe Turner <joe@oampo.co.uk> wrote:

> - Operator nodes, AudioParam nodes
>
> Can we have separate nodes for the basic mathematical operators (add,
> subtract, divide, multiply and modulo), and a way of having the output
> of an AudioParam as a signal?  This would allow all the flexibility
> needed for scaling, offsetting and combining signals in order to
> control parameters.  I know a bit of trickery can make stuff like this
> possible at the moment, and it's trivial to implement in JavaScript,
> but it seems like core functionality to me.
>

Actually, you should be able to do most of these operations already by
connecting nodes together: fanning two outputs into one input adds them; for
subtract, flip one signal's phase first (a waveshaper, or a gain of -1, will
do) and then sum; and GainNodes give you multiply (or divide by a constant,
via a gain of 1/k).  I'm not sure how you'd do modulo, though.
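
Here's a rough sketch of that approach (my example; the helper name and
wiring are illustrative, not spec API - only the node types themselves are):

```javascript
// Sketch: signal arithmetic with ordinary nodes.  Assumes an AudioContext
// `ctx` and two source nodes `a` and `b`.
function setupSignalMath(ctx, a, b) {
  // Add: fan-in.  Two outputs connected to one input are summed.
  var sum = ctx.createGain();
  a.connect(sum);
  b.connect(sum);

  // Subtract: invert one input (a gain of -1 flips phase), then sum.
  var invert = ctx.createGain();
  invert.gain.value = -1;
  var diff = ctx.createGain();
  a.connect(diff);
  b.connect(invert);
  invert.connect(diff);

  // Multiply: route one signal into the other's gain AudioParam.
  var product = ctx.createGain();
  product.gain.value = 0;      // let the modulator alone drive the gain
  a.connect(product);
  b.connect(product.gain);     // AudioNode -> AudioParam connection

  // Divide by a constant k is just a gain of 1/k on another GainNode.
  return { sum: sum, diff: diff, product: product, invert: invert };
}
```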


> - Tapping DelayNodes
>
> At the moment it's only possible to tap a delay line at the beginning,
> and the read position cannot be modulated.  This makes it pretty much
> impossible to implement effects such as pitch shifting, chorus,
> phasing, flanging and modulated reverbs, all of which rely on either
> multiple taps or modulated taps.  It would be nice to have something
> similar to Supercollider's BufRd [4] and BufWr [5] with DelayNode
> built as a layer on top of this.  Also AudioBufferSourceNode could be
> a layer on top of a BufRd equivalent.
>

Not sure what you mean - you should be able to build all these.  Remember
you can chain multiple delay nodes in series as well as in parallel.  For
chorus and flanging, I'm currently working on a code sample.
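
In the meantime, here's the general shape such a sample might take (my
sketch, not the promised one, and method names are the current draft's): a
basic flanger that sweeps a DelayNode's read position by connecting an
oscillator to its delayTime AudioParam.

```javascript
// Sketch of a flanger: an LFO modulates a DelayNode's delayTime, and the
// delayed signal is mixed back with the dry signal.  Assumes an
// AudioContext `ctx`; feed your source into the returned `input` node.
function makeFlanger(ctx) {
  var input = ctx.createGain();
  var delay = ctx.createDelay();
  delay.delayTime.value = 0.005;     // 5 ms base delay

  var lfo = ctx.createOscillator();  // slow sine sweeping the tap
  lfo.frequency.value = 0.25;
  var depth = ctx.createGain();
  depth.gain.value = 0.002;          // +/- 2 ms of sweep
  lfo.connect(depth);
  depth.connect(delay.delayTime);    // AudioNode -> AudioParam modulation
  lfo.start();

  var mix = ctx.createGain();        // dry + wet sum gives the comb effect
  input.connect(mix);
  input.connect(delay);
  delay.connect(mix);
  return { input: input, output: mix, delay: delay, lfo: lfo, depth: depth };
}
```

A chorus is the same wiring with a longer base delay (20-30 ms) and a gentler sweep.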

> - AudioBufferSourceNode playbackState change event
>
> Would it be useful to have an event fired when the
> AudioBufferSourceNode's state changes?  I can't think of anything off
> the top of my head, but it seems like it could be useful for some
> applications maybe?
>

Actually, it would be interesting to do that; I've wanted to do cleanup
(e.g., change a play button's visual state back when the BufferSourceNode
finishes playing), and I've currently hacked it with
setTimeout(cleanup, buffer.duration * 1000).  That won't work when
playbackRate is being manipulated, though.
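
Concretely, the hack looks something like this (my sketch; `onEnded` stands
in for whatever UI cleanup you need):

```javascript
// The setTimeout workaround: schedule cleanup for when the buffer *should*
// finish.  Assumes `ctx` is an AudioContext and `buffer` a decoded
// AudioBuffer.
function playWithCleanup(ctx, buffer, onEnded) {
  var source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start(0);
  // Breaks if playbackRate is changed while playing -- hence the wish for
  // a real state-change event.
  setTimeout(onEnded, buffer.duration * 1000);
  return source;
}
```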


> - AudioPannerNode
>
> Having never done any work in 3D sound I find this all a bit
> intimidating.  Is there any chance of something simpler built on top
> of this for those of us who want sound to come out of the left
> speaker, the right speaker, or some combination of the two?
>

I'll come up with a sample.  It's actually quite easy to do simple panning.
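
Until that sample exists, one simple route (my sketch) skips the 3D panner
entirely: an equal-power left/right pan built from two gain nodes and a
channel merger.

```javascript
// Equal-power gains for a pan position in [-1, 1] (-1 = hard left).
function equalPowerGains(pan) {
  var angle = (pan + 1) * Math.PI / 4;   // maps [-1, 1] onto [0, pi/2]
  return { left: Math.cos(angle), right: Math.sin(angle) };
}

// Wiring sketch: split into per-channel gains, merge back to stereo.
// Assumes an AudioContext `ctx` and a mono source node `src`.
function panNode(ctx, src, pan) {
  var g = equalPowerGains(pan);
  var left = ctx.createGain();
  var right = ctx.createGain();
  left.gain.value = g.left;
  right.gain.value = g.right;
  var merger = ctx.createChannelMerger(2);
  src.connect(left);
  src.connect(right);
  left.connect(merger, 0, 0);    // output 0 -> merger input 0 (L)
  right.connect(merger, 0, 1);   // output 0 -> merger input 1 (R)
  return merger;
}
```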


> - RealTimeAnalyserNode
>
> This seems strange to me - the functionality could be really useful,
> but it seems focused very narrowly on creating visualisations.  I
> think a nicer solution would be to have separate FFT and IFFT nodes so
> frequency domain effects could be integrated into the processing
> chain, and then a separate node which allows access to the FFT or
> waveform data depending on where in the graph it is inserted.  So for
> visualisations you would have an AudioNode connected to an FFTNode,
> connected to a BufferYoinkerNode.
>

That's essentially what the RealtimeAnalyserNode already is.  What scenario
are you trying to enable, exactly?


> - DynamicsCompressorNode sidechaining and lookahead
>
> I'm not sure if these are a bit specialised, a bit of a dark art, or
> both, but they are both common and fairly well defined features of
> compressors which may be useful.  I could see sidechaining being
> especially useful for ducking in broadcast applications.
>

Lookahead is different, of course.  Sidechaining could be accomplished by
connecting the DynamicsCompressor's 'reduction' parameter to a gain node -
except that since 'reduction' is an AudioParam rather than an AudioNode
output, it can't currently be connected anywhere.  That could be fixed by
adding an AudioNode mirror of 'reduction', I suppose.
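
In the meantime, sidechain-style ducking can be approximated in script (my
sketch, nothing to do with the compressor's internals): meter the sidechain
in a JavaScript processing node and drive a gain on the program signal.
The method name createScriptProcessor is an assumption here; older builds
call it createJavaScriptNode.

```javascript
// Sketch of script-based ducking.  `program` and `sidechain` are
// AudioNodes; `amount` scales how hard the sidechain pushes the gain down.
function makeDucker(ctx, program, sidechain, amount) {
  var duck = ctx.createGain();
  program.connect(duck);

  var meter = ctx.createScriptProcessor(1024, 1, 1);
  sidechain.connect(meter);
  meter.onaudioprocess = function (e) {
    var x = e.inputBuffer.getChannelData(0);
    var sum = 0;
    for (var i = 0; i < x.length; i++) sum += x[i] * x[i];
    var rms = Math.sqrt(sum / x.length);
    duck.gain.value = Math.max(0, 1 - amount * rms);  // crude, no smoothing
  };
  meter.connect(ctx.destination);  // keep the processor from being collected
  return { duck: duck, meter: meter };
}
```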

-Chris

Received on Tuesday, 19 June 2012 16:34:51 UTC