
Re: setValueFunctionAtTime for AudioParam

From: Chris Rogers <crogers@google.com>
Date: Thu, 29 Mar 2012 11:32:16 -0700
Message-ID: <CA+EzO0mUVnfNZYgLvYsn_7eJsC2qNS-_uSBHs9qf4jW9Jz1PEQ@mail.gmail.com>
To: Patrick Borgeat <patrick@borgeat.de>
Cc: public-audio@w3.org
On Thu, Mar 29, 2012 at 12:55 AM, Patrick Borgeat <patrick@borgeat.de> wrote:

> Chris,
>
> I saw this method and agree that almost every use case can be achieved
> with it, but it can be hard to allow for a certain degree of
> interactivity (and under some circumstances it might also eat up a lot of
> memory).
>
> In my LFO example, what if the frequency of the sine wave changes with
> another variable that changes with user input? If I could include this
> variable in the closure of the callback function, this would be very
> easy.
>

> With setValueCurveAtTime I now have to push small value arrays to the
> AudioParam at short intervals, which looks far more troublesome to me.
>

I agree that for some applications like an LFO, it becomes trickier.  For
these types of applications I'm envisioning audio-rate signals directly
controlling parameters.  For example, an oscillator audio source can
control the gain of an AudioGainNode for AM effects, or the frequency of
another oscillator (FM).  This adds a whole new level of possibility for
interactive control.
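To illustrate that kind of routing, here's a minimal sketch, assuming the 2012-era prefixed method names (createGainNode, createOscillator, noteOn) — the wiring is the point, not the exact spellings:

```javascript
// AM via audio-rate parameter control: an oscillator's output drives the
// gain AudioParam directly, instead of scheduled value changes.
// setupAM is an illustrative helper, not part of the API.
function setupAM(context, source, modFrequency) {
  var gainNode = context.createGainNode();
  var lfo = context.createOscillator();
  lfo.frequency.value = modFrequency;  // modulation rate in Hz

  // Connecting a node's output to an AudioParam sums the signal with the
  // parameter's value, so the oscillator modulates around the set gain.
  lfo.connect(gainNode.gain);

  source.connect(gainNode);
  gainNode.connect(context.destination);
  lfo.noteOn(0);                       // 2012-era name for start()
  return gainNode;
}
```

The same pattern with `lfo.connect(otherOsc.frequency)` would give FM.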


>
> ADSR envelopes (or more complex ones) are also problematic with this push
> behavior. You can't set the complete ADSR envelope value array if the R
> time isn't known. You could push the AD phase and (OK, point for you)
> schedule the S phase with a constant value. On release you could schedule
> the R phase, but:
>
> The user releases the envelope while still in the D phase. The
> precomputed R phase now has to be recomputed, as the start amplitude value
> is higher than expected because the envelope never reached the S value. If
> I have already scheduled my S phase, I need to cancel it (otherwise a
> short R phase would finish before reaching the S phase and my
> envelope would snap back up).
>

I've already implemented a simple synth using envelopes so I know this can
be done (demo uses slightly simpler envelopes):
http://chromium.googlecode.com/svn/trunk/samples/audio/wavetable-synth.html

The way I'd recommend handling the R phase is to schedule an "exponential
approach" with a given time constant:

        void setTargetValueAtTime(in float targetValue, in float time, in float timeConstant);

Using this technique, no matter what the value happens to be at the time
when R is scheduled, it will make a smooth transition to the new value (0
in the case of release).  For other, more "free form" envelopes which need
to be generated according to user interactivity, it's possible to call
setTargetValueAtTime() many times.
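As a back-of-the-envelope model of that exponential approach (targetValueAt is just an illustrative helper, not part of the API), the scheduled curve is v(t) = target + (v0 - target) * exp(-(t - t0) / timeConstant):

```javascript
// Evaluate the "exponential approach" curve that setTargetValueAtTime()
// schedules: starting from value v0 at time t0, decay toward target with
// the given time constant (seconds).
function targetValueAt(v0, t0, target, timeConstant, t) {
  if (t <= t0) return v0;
  return target + (v0 - target) * Math.exp(-(t - t0) / timeConstant);
}
```

Whatever value the parameter happens to have when release begins plays the role of v0, which is why the transition is smooth even if release arrives mid-decay: after one time constant the value has closed about 63% of the gap to the target.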

In a previous discussion with Phil Burk, we also discussed the possibility
of an anchorValueAtTime() method:
http://lists.w3.org/Archives/Public/public-audio/2012JanMar/0173.html

This would handle the case where you need to "grab" the current value
(wherever it happens to be) in order to set the starting point for a linear
or exponential ramp.  So interactive free-form "tracing" of parameters
(with low latency) should be possible.


> cancelScheduledValues() would cancel all parameter changes I have already
> scheduled, which I probably don't want to do.
>
> This would have been easy if I could just program a callback function or
> some kind of AudioParamJavaScriptNode.
>
>
>
> I understand that it's dangerous to call a JavaScript function inside the
> audio graph, so an AudioParam callback method could potentially stall the
> audio. But the JavaScriptNode has the same problem.
>

I understand your interest in simply having the audio engine call back
into JavaScript to generate values.  But I'm afraid it's just not feasible
given the real-time nature of the native code running in a real-time
thread.  It's actually more than dangerous: the audio code is running in
another thread, so it can't call JavaScript code directly at the time when
it's needed; it must instead tell the JS thread to run a particular
callback at a (much) later time.  There's no way the audio thread can
block while waiting for the result to become available, because a
real-time thread is not allowed to block, and blocking would certainly
cause glitching most of the time even if it were allowed.  The
JavaScriptAudioNode is different because it uses buffering.

But here's one idea which may partially get you what you need.  Given that
I've been talking about letting audio sources directly control parameters
(in my examples above for AM and FM), it is conceivable to use a
JavaScriptAudioNode as this audio source.  In this way, although the
parameter changes would have some latency, the JavaScriptAudioNode could
provide this control.
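A sketch of that idea, with the per-buffer fill logic pulled out into a hypothetical helper (fillLFO) so the Web Audio wiring — which uses the 2012-era createJavaScriptNode() name — stays in comments:

```javascript
// Write one block of a sine LFO control signal into an output buffer,
// continuing from absolute sample position startFrame. Returns the next
// startFrame so successive buffers join without phase discontinuities.
function fillLFO(output, startFrame, sampleRate, lfoFrequency, depth) {
  for (var i = 0; i < output.length; i++) {
    var t = (startFrame + i) / sampleRate;
    output[i] = depth * Math.sin(2 * Math.PI * lfoFrequency * t);
  }
  return startFrame + output.length;
}

// In a page, the wiring might look something like this (assumed names):
// var node = context.createJavaScriptNode(1024, 1, 1);
// var frame = 0;
// node.onaudioprocess = function (e) {
//   frame = fillLFO(e.outputBuffer.getChannelData(0),
//                   frame, context.sampleRate, 5, 0.5);
// };
// node.connect(osc.frequency);  // drive the parameter with the JS LFO
```

The control values arrive a buffer at a time, so the latency is roughly the node's buffer size divided by the sample rate.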

Cheers,
Chris
Received on Thursday, 29 March 2012 18:32:46 GMT
