Re: Web Audio API questions and comments

On Wed, Jun 20, 2012 at 2:51 AM, Joe Turner <joe@oampo.co.uk> wrote:

> Hi Chris,
> Thanks for the response - I think you've cleared up lots of this for me.
>
> On Tue, Jun 19, 2012 at 8:35 PM, Chris Rogers <crogers@google.com> wrote:
> >
> >
> > On Tue, Jun 19, 2012 at 7:39 AM, Joe Turner <joe@oampo.co.uk> wrote:
> >>
> >>
> >> - AudioNode cycles with JavaScriptAudioNodes
> >>
> >> Given that JavaScriptAudioNodes have an inherent delay built in, they
> >> should be fine to use in feedback loops, as DelayNodes are, I think.
> >> Is this correct?
> >
> >
> > I think it will be ok, although the latency of the JavaScriptAudioNode
> > will factor into the overall delay.  There would be some limits on very
> > small delay sizes.  But in many practical cases, this won't be an issue.
> >
>
> Yeah - can this be changed in the specification then so it won't throw
> an exception?  I could see this being handy.
>

I know we've discussed different strategies, including throwing exceptions,
but I don't remember adding that to the spec yet.  I hope we can avoid
dealing with exceptions in this case.
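
For reference, the kind of graph in question looks something like this (a
rough sketch against the current WebKit-prefixed implementation; the buffer
size and feedback gain are arbitrary):

    var context = new webkitAudioContext();

    // A JavaScriptAudioNode with its inherent buffering (1024 frames here).
    var processor = context.createJavaScriptNode(1024, 1, 1);
    processor.onaudioprocess = function (e) {
        // Pass-through; any custom processing would go here.
        e.outputBuffer.getChannelData(0).set(e.inputBuffer.getChannelData(0));
    };

    var feedback = context.createGainNode();
    feedback.gain.value = 0.5;

    processor.connect(feedback);
    feedback.connect(processor);   // closes the cycle; the node's own
                                   // buffering acts as the loop delay
    processor.connect(context.destination);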


>
> >>
> >>
> >> - Control rate AudioParams and interpolation
> >>
> >> Will there ever be a use for interpolation between values with a
> >> control rate AudioParam rather than skipping to the next value at the
> >> start of each block?  The SuperCollider documentation [3] mentions
> >> that this technique is used in some UGens, which seems plausible, but
> >> I'm not clear on when or why this is appropriate.  Does something like
> >> this need specifying?
> >
> >
> > I'm not sure what you mean exactly.  All the "automation" methods on
> > AudioParam will generate the parameter values at a-rate, which is
> > high-resolution.
>
> Oh, I think I've been an idiot here.  Apologies - ignore this!
>

No worries :) There are some subtleties, such as some parameters being
intrinsically k-rate (like .attack and .release of DynamicsCompressorNode)
and others being a-rate (like .frequency of Oscillator).  But hopefully this
is all in the spec now.
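
For example (a sketch using the current WebKit-prefixed constructor; the
values are arbitrary):

    var context = new webkitAudioContext();

    // .frequency is a-rate: automation curves are applied with
    // per-sample-frame resolution.
    var osc = context.createOscillator();
    osc.frequency.setValueAtTime(440, 0);
    osc.frequency.linearRampToValueAtTime(880, 1.0);

    // .attack and .release are intrinsically k-rate: the value is sampled
    // once per processing block.
    var compressor = context.createDynamicsCompressor();
    compressor.attack.value = 0.01;
    compressor.release.value = 0.1;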


>
> >>
> >>
> >> - AudioGainNode dezippering
> >>
> >> Can this at least be optional?  If I'm using an AudioGainNode to scale
> >> an audio node so it can control an AudioParam (for example to act as
> >> an lfo), then I don't want the output to be filtered in any way.
> >
> >
> > Yes, this is actually already the case.  I haven't yet explained
> > de-zippering very well in the specification.  But de-zippering only
> > really applies if .value changes to the AudioParam are being made
> > directly, instead of via audio-rate signals or via "automation" APIs.
> > In other words, if somebody is changing a gain value:
> >
> > gainNode.gain.value = x;
> >
> > Then that value will be de-zippered.  But, if you're calling
> > linearRampToValueAtTime(), or connecting an audio-rate signal to the
> > parameter, then it will take the exact value from those signals.
> >
> >
>
> Ah, okay - this makes sense.  Does setValueAtTime use de-zippering?
>

No, as with the other "automation" APIs of AudioParam, it will not do
de-zippering, since the idea is to specify exact values at a given time,
which de-zippering would interfere with.  However, you can implement your
own form of "scheduled" automation de-zippering by using
setTargetValueAtTime() instead.
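
Concretely (reusing the gainNode from above; the 50 ms time constant is
just an example):

    var t = context.currentTime + 0.1;

    // Jumps to exactly 0.5 at time t (no de-zippering):
    gainNode.gain.setValueAtTime(0.5, t);

    // ...or instead, approaches 0.5 exponentially starting at time t,
    // giving a "de-zippered" scheduled change; the third argument is the
    // exponential time constant in seconds:
    gainNode.gain.setTargetValueAtTime(0.5, t, 0.05);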


>
> >>
> >> Although a better solution may be:
> >>
> >> - Operator nodes, AudioParam nodes
> >>
> >> Can we have separate nodes for the basic mathematical operators (add,
> >> subtract, divide, multiply and modulo), and a way of having the output
> >> of an AudioParam as a signal?
> >
> >
> > We already have add, subtract, and multiply.  You can also get many other
> > transformations by using a WaveShaperNode.  There are probably some
> > operations which would not be possible, but I think they would be *much*
> > more specialized.  And in these cases a JavaScriptAudioNode could help
> > out.
> >
> >
> >>
> >>  This would allow all the flexibility
> >> needed for scaling, offsetting and combining signals in order to
> >> control parameters.
> >
> >
> > I think we already have these with the built-in mixing and the
> > AudioGainNode, etc.
> >
> >>
> >>  I know a bit of trickery can make stuff like this
> >> possible at the moment, and it's trivial to implement in JavaScript,
> >> but it seems like core functionality to me.
> >
> >
> > I'm open to suggestions, but I think many of the things you've mentioned
> > are already possible.
>
> Here's where I tend to disagree a little.  It seems unintuitive to me
> to be doing audio maths using the WaveShaperNode.  For example, say I
> want to get the reciprocal of a signal.  In order to do this I have
> two options.  One is to write a JavaScriptAudioNode - this is trivial,
> but now my synth has four times the latency it had before.  My other
> option is to create a Float32Array and fill it with a 1/x curve, then
> make a WaveShaperNode from this.  This, I would argue, is:
> a) Non-trivial - I always get the maths with the indices wrong the
> first time when creating lookup tables (although that might just be
> me...)
> b) Gives a 'worse' result - we are using a lookup table rather than
> doing the maths directly
> c) Non-obvious - the specification says that the WaveShaperNode is for
> creating "non-linear distortion effects", which is not what I'm trying
> to do
>
> I can see that a 20-line JavaScript library would sort this out (and
> would be the first thing I included in any Web Audio API project), but
> making it non-trivial to do maths on the audio stream, create
> constants, etc., for the sake of reducing the number of nodes by one or
> two seems like a strange decision.
>

I agree with you for a 1/x curve, and other less commonly used curves.  But
DC-offset, addition, subtraction, and multiplication can all be done
without a WaveShaperNode, and these are the ones which I thought would
cover the large majority of common cases.  Division is used much less
often, so I didn't think to define a node specifically for it.
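
To make that concrete, here is a rough sketch of those operations against
the current WebKit-prefixed API (names and values are placeholders):

    var context = new webkitAudioContext();

    // Multiplication: an audio-rate signal connected to .gain of an
    // AudioGainNode multiplies whatever passes through that node.
    var lfo = context.createOscillator();
    lfo.frequency.value = 5;
    var depth = context.createGainNode();
    depth.gain.value = 0.3;            // LFO depth
    var vca = context.createGainNode();
    lfo.connect(depth);
    depth.connect(vca.gain);           // audio-rate control of the gain

    // Addition: fan-in.  Multiple outputs connected to one input are summed.
    var a = context.createOscillator();
    var b = context.createOscillator();
    a.connect(vca);
    b.connect(vca);

    // Subtraction (a - b): the same, but route b through a gain of -1 first:
    // var invert = context.createGainNode();
    // invert.gain.value = -1;
    // a.connect(vca); b.connect(invert); invert.connect(vca);

    // DC offset: loop a one-sample buffer holding the constant value.
    var dc = context.createBufferSource();
    var buffer = context.createBuffer(1, 1, context.sampleRate);
    buffer.getChannelData(0)[0] = 1.0;
    dc.buffer = buffer;
    dc.loop = true;

    // (Sources would then be started with noteOn(0).)

The looping one-sample buffer is only a workaround for the lack of a
dedicated constant-source node, which is part of what you're asking for.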

Chris

Received on Wednesday, 20 June 2012 21:30:12 UTC