- From: Patrick Borgeat <patrick@borgeat.de>
- Date: Thu, 29 Mar 2012 09:55:34 +0200
- To: Chris Rogers <crogers@google.com>
- Cc: public-audio@w3.org
- Message-Id: <61E405DF-03BA-42F7-A9ED-451EBE49EE02@borgeat.de>
Chris,

I saw this method and agree that almost every use case can be achieved with it, but it can be hard to allow for a certain degree of interactivity (and under some circumstances it might eat up a lot of memory).

In my LFO example, what if the frequency of the sine wave depends on another variable that changes with user input? If I could include this variable in the closure of a callback function, this would be very easy. With setValueCurveAtTime I now have to compute short value arrays and push these small arrays to the AudioParam in short intervals, which looks far more troublesome to me.

ADSR envelopes (or more complex ones) are also problematic with this push behaviour. You can't set the complete ADSR envelope value array if the R time isn't known. You could push the AD phase and (OK, point for you …) schedule the S phase with a constant value. On release you could schedule the R phase, but: the user releases the envelope while still in the D phase. (The precomputed R phase now has to be recomputed, as the start amplitude is higher than expected because the envelope never reached the S value.) If I have already scheduled my S phase, I also need to cancel it (otherwise a short R phase would finish before the S phase is reached and my envelope would snap back up). cancelScheduledValues would cancel all parameter changes I have already scheduled, which I probably don't want to do. This would have been easy if I could just program a callback function or some kind of AudioParamJavaScriptNode.

I understand that it's dangerous to call JavaScript functions inside the audio graph, so an AudioParam callback method could potentially stall the audio. But the JavaScriptNode has the same problem. It's good to have basic underlying implementations in C to cover 90% of all problems, but the current AudioParam approach looks hard to use to me for complex interactive settings.

Equipping the callback-setting method with a buffer size and a resolution could make things more efficient (fewer calls, less computation in JavaScript) if fast responses aren't needed (large buffer size) and interpolation (in the C engine) works well for the kind of automation data. If the buffer size is small and the resolution is 1 (sample accurate) you get fast responses, but of course the programmer has to be warned that this can create audio dropouts.

cheers,
Patrick

On 29.03.2012, at 02:55, Chris Rogers wrote:

> Hi Patrick,
>
> Sorry, somehow I missed this thread.
>
> I think what you might be looking for is:
>
> partial interface AudioParam {
>     void setValueCurveAtTime(in Float32Array values, in float time, in float duration);
> }
>
> In this way, arbitrary sample-accurate parameter value changes can be scheduled. This array could be filled with a slow sine function as in your example. As a second example, a high-resolution amplitude envelope with arbitrary shape (more complex than standard ADSR) could be applied to an audio sample.
>
> Instead of callbacks from the audio engine to JavaScript as you're envisioning (which would be much less efficient), the control flow works in the opposite direction, with the JavaScript pushing precise curves to the parameter at exact times.
>
> Chris
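For reference, a minimal sketch of how the pushed-curve approach from the reply above plays out for the interactive LFO case. The names here use the unprefixed spelling of the current draft (shipping builds may need prefixed or older method names), and the chunk length, curve resolution, and scheduling headroom are arbitrary placeholders, not recommended values:

    // LFO on a GainNode's gain, realised by repeatedly pushing short
    // value curves with setValueCurveAtTime. lfoFreq is assumed to be
    // updated elsewhere from user input (e.g. a slider's input event).
    var ctx = new AudioContext();
    var osc = ctx.createOscillator();
    var gain = ctx.createGain();
    osc.connect(gain);
    gain.connect(ctx.destination);
    osc.start();

    var lfoFreq = 0.5;        // Hz, changes with user input
    var chunkDuration = 0.1;  // push 100 ms of curve at a time
    var curveLength = 128;    // samples per pushed curve
    var phase = 0;
    var nextTime = ctx.currentTime;

    function pushNextChunk() {
      var curve = new Float32Array(curveLength);
      for (var i = 0; i < curveLength; i++) {
        // 0..1 sine; phase is carried across chunks so frequency
        // changes take effect without discontinuities
        curve[i] = 0.5 + 0.5 * Math.sin(phase);
        phase += 2 * Math.PI * lfoFreq * (chunkDuration / curveLength);
      }
      gain.gain.setValueCurveAtTime(curve, nextTime, chunkDuration);
      nextTime += chunkDuration;
    }

    // Keep a little scheduling headroom. Smaller chunks react faster to
    // lfoFreq changes but mean more timer callbacks and more allocation.
    setInterval(function () {
      while (nextTime < ctx.currentTime + 0.3) pushNextChunk();
    }, 50);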
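And a minimal sketch of the ADSR case discussed above, using the existing automation methods on a gain AudioParam. The attack/decay/sustain/release values and function names are illustrative only; the comments mark where the early-release problem shows up:

    // ADSR scheduled with setValueAtTime / linearRampToValueAtTime.
    var attack = 0.01, decay = 0.3, sustain = 0.4, release = 0.5;

    function noteOn(gainParam, t) {
      gainParam.setValueAtTime(0, t);
      gainParam.linearRampToValueAtTime(1, t + attack);               // A
      gainParam.linearRampToValueAtTime(sustain, t + attack + decay); // D, then S holds
    }

    function noteOff(gainParam, t) {
      // If t falls inside the D phase, the parameter sits somewhere
      // between 1 and sustain, but there is no reliable way to read that
      // value, and cancelScheduledValues(t) also removes any later
      // events that may still be wanted.
      gainParam.cancelScheduledValues(t);
      // Without the actual current value we can only guess a start point
      // for the release ramp, so the envelope may jump here.
      gainParam.setValueAtTime(sustain, t);
      gainParam.linearRampToValueAtTime(0, t + release);              // R
    }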
Received on Thursday, 29 March 2012 07:56:09 UTC