Re: [web-audio-api] (setValueCurveAtTime): AudioParam.setValueCurveAtTime (#131)

> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=17335#12) by redman on W3C Bugzilla. Tue, 11 Dec 2012 19:34:44 GMT

(In reply to [comment #12](#issuecomment-24244460))
> Here's my take:
> 
> AudioParam already has several ways of being controlled:
> 
> 1) linear and exponential ramps: these are *very* well established ways of
> controlling parameters going back decades in computer music systems and
> synth design.  Furthermore, these can be a-rate, so much smoother than
> systems which work exclusively using k-rate.
> 
> 2) arbitrary curves: These are mostly useful as slowly varying curves, but
> can be oversampled to a very high degree (2x, 4x, 8x) by providing a
> Float32Array which is far longer than the duration of the curve.  I don't
> think we should be worried about memory performance here since these will
> still generally be much smaller than the audio assets themselves.  This
> oversampling can help to a great degree to band-limit the signal.

I'd agree with this, except that it may not be clear to the user that the data needs to be sufficiently smooth to be rendered at higher speeds without artefacts.
While it's easy to understand that a longer array gives more precision, my guess is that most people won't know enough about audio to understand why their custom curve sounds bad at higher speeds.
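To make the oversampling idea from (2) concrete, here's a minimal sketch of supplying a curve with 4x more points than one-per-output-sample; the sample rate, duration, curve shape, and node names are all illustrative assumptions, not anything from the spec:

```javascript
// Sketch: build a 4x-oversampled, smooth decay curve for setValueCurveAtTime.
const sampleRate = 44100;   // assumed AudioContext sample rate
const duration = 0.5;       // curve duration in seconds
const oversample = 4;       // 4x more points than one per output sample

const len = Math.ceil(duration * sampleRate * oversample);
const curve = new Float32Array(len);
for (let i = 0; i < len; i++) {
  const x = i / (len - 1);       // normalized position 0..1
  curve[i] = Math.exp(-5 * x);   // smooth exponential decay, no sharp corners
}

// On a real page you would then schedule it, e.g.:
// gainNode.gain.setValueCurveAtTime(curve, audioCtx.currentTime, duration);
```

Because the generating function is smooth, the dense array stays artefact-free even if the implementation reads it at a higher effective rate.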


> 3) control via audio-rate signals: These signals can be band-limited to the
> extent that source node, and the processing nodes use band-limited
> approaches.
> 

You forgot case 4): directly setting the value without any interpolation. 


> Especially with (3) we have a pretty rich possibility of controlling the
> parameters, including ways which are concerned about band-limited signals.
> 
But it would be computationally intensive to create a flexible envelope generator in JS that generates samples at audio rate.

Usually an envelope consists of several function segments that are controlled independently.
What you want is the ability to glue these segments together at different rates.
So a decay segment could be the same series of samples as the release, just played at a different rate.
If you were to use a sample generator, you would have to calculate the length of each segment at the desired speed before you could schedule it.
You would also have to somehow cascade several of these generators and switch between them. Getting this right will not be fun, and it would be much clearer if you could just use the curve function to map a curve sample to a specified time.
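A sketch of that "glue segments at different rates" idea using setValueCurveAtTime, where one shared shape serves as both decay and release simply by giving each call its own duration (the curve values and timings here are made up for illustration):

```javascript
// One shared curve shape, reused at different rates by varying the duration.
const segCurve = new Float32Array([1, 0.6, 0.35, 0.2, 0.1, 0]);

// Schedule curve segments back-to-back on an AudioParam-like object.
// segments: array of { curve, duration }; returns the envelope's end time.
function scheduleEnvelope(param, t0, segments) {
  let t = t0;
  for (const { curve, duration } of segments) {
    param.setValueCurveAtTime(curve, t, duration);
    t += duration;
  }
  return t;
}

// Usage on a real node would look like:
// scheduleEnvelope(gainNode.gain, audioCtx.currentTime, [
//   { curve: segCurve, duration: 0.05 },  // fast decay
//   { curve: segCurve, duration: 0.4 },   // same shape, slower release
// ]);
```

The only bookkeeping is accumulating start times, which is exactly the work a sample-generator approach would force you to redo by hand for every rate change.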



> This does bring up other areas of the API which need to be concerned with
> aliasing:
> 
> AudioBufferSourceNode: currently its interpolation method is unspecified. 
> WebKit uses linear interpolation, but cubic, and higher order methods could
> be specified using an attribute.

For samples I'd suggest a FIR filter with a sinc kernel if you implement anything fancier than linear.
Such a filter would be usable in both the oversampled and the undersampled case.
Cubic would only rarely be appropriate, and then mostly when the original data represents an already smooth function. It will certainly get you in trouble with periodic signals of mid to high frequency.
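As a rough illustration of what such a filter looks like, here's a Hann-windowed sinc interpolator; the tap count and window choice are arbitrary, a sketch rather than a proposal for the spec:

```javascript
// Hann-windowed sinc interpolation of `samples` at fractional index t.
// halfTaps controls the kernel width, trading quality against cost.
function sincInterp(samples, t, halfTaps = 8) {
  const n0 = Math.floor(t);
  let sum = 0;
  for (let k = n0 - halfTaps + 1; k <= n0 + halfTaps; k++) {
    if (k < 0 || k >= samples.length) continue;     // skip out-of-range taps
    const x = t - k;                                // distance from tap k
    const sinc = x === 0 ? 1 : Math.sin(Math.PI * x) / (Math.PI * x);
    const win = 0.5 * (1 + Math.cos((Math.PI * x) / halfTaps)); // Hann window
    sum += samples[k] * sinc * win;
  }
  return sum;
}
```

At integer positions this reproduces the original samples exactly; in between it gives a band-limited estimate. For the undersampled (downsampling) case you would additionally widen the kernel to lower its cutoff, which is what makes the sinc approach usable in both directions.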



> OscillatorNode: once again the quality could be controlled via attribute. 
> WebKit currently implements a fairly high-quality interpolation here
 
Do you mean the .frequency parameter?
If so, the interpolation here is almost useless without a way to control the rate of change. As it stands, you need to circumvent the interpolation to get a straight note out of the oscillator.

> WaveShaperNode: there are two aspects of interest here:
> 1) How is the .curve attribute sampled?  Currently the spec defines this as
> "drop sample" interpolation and not even linear, but we should consider the
> option of linear.  I'm concerned about this one because I notice people are
> using the WaveShaperNode for distortion with relatively small curves (such
> as 8192 in tuna.js) which will end up not only shaping the signal, but
> adding a bit-crushing/bit-decimation effect, which may or may not be the
> effect wanted)
> 

I agree that short curves will lead to extra degradation.
But both the interpolated and the non-interpolated case are interesting from a musical point of view. In other words, it would be really cool if there was interpolation, but it needs to be optional.
The objective for this interpolator would be to create a smooth curve without peaks or resonances (which, for instance, cubic or sinc would introduce). The non-linearity of such an interpolator can be welcome in the case of heavy distortion; after all, the whole point of distortion is to introduce a non-linear change to the waveform. A classic clipping distortion is full of alias-like components, for instance.
So the requirements for this interpolator are different than in the resampling case.
All of this is great for raw sounds.
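To make the difference concrete, here's a sketch of the two lookup strategies for a WaveShaper-style curve, mapping an input x in [-1, 1] onto a curve array (the helper names are made up for illustration):

```javascript
// "Drop sample" lookup: truncate to the nearest lower curve index.
function shapeDrop(curve, x) {
  const n = curve.length;
  let i = Math.floor(((x + 1) / 2) * (n - 1)); // map [-1,1] -> [0, n-1]
  i = Math.max(0, Math.min(n - 1, i));         // clamp out-of-range input
  return curve[i];
}

// Linear lookup: interpolate between the two neighbouring curve points.
function shapeLinear(curve, x) {
  const n = curve.length;
  let pos = ((x + 1) / 2) * (n - 1);
  pos = Math.max(0, Math.min(n - 1, pos));
  const i = Math.floor(pos);
  const frac = pos - i;
  const j = Math.min(n - 1, i + 1);
  return curve[i] + frac * (curve[j] - curve[i]);
}
```

With a short curve, shapeDrop quantizes the transfer function into steps (the bit-crushing effect mentioned above), while shapeLinear follows the curve smoothly; making the choice between them an option would serve both uses.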

> 2) Is the wave-shaping curve applied at the AudioContext sample-rate, or is
> the signal first up-sampled to a higher sample-rate to avoid aliasing??  The
> option to have band-limited wave-shaping will become more and more important
> with the advent of applications like guitar amp simulations.  Aliasing can
> seriously affect the quality of the distortion sound.  We know people are
> interested in these kind of applications, since they're already showing up:
> (Stuart Memo's work, tuna.js, and
> http://dashersw.github.com/pedalboard.js/demo/)

It would be super if the algorithm oversampled.
This would allow especially subtle use of the wave-shaper.
A possible problem is that you may have to oversample several times over to get properly anti-aliased distortion.
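A crude sketch of the idea: estimate one midpoint per sample by linear interpolation for 2x upsampling, apply the nonlinearity at the higher rate, then average back down. A real implementation would use proper polyphase filters, and the clip threshold here is arbitrary:

```javascript
// Hard clipper at an arbitrary +/-0.5 threshold (illustrative).
function clip(x) { return Math.max(-0.5, Math.min(0.5, x)); }

// Apply the clipper at a crude 2x oversampling: linear-interpolate a
// midpoint, shape both phases, then average the pair back down to 1x.
function shapeOversampled2x(input) {
  const out = new Float32Array(input.length);
  let prev = 0; // previous input sample (assume silence before the block)
  for (let n = 0; n < input.length; n++) {
    const mid = (prev + input[n]) / 2;          // crude upsample
    out[n] = (clip(mid) + clip(input[n])) / 2;  // shape, then decimate
    prev = input[n];
  }
  return out;
}
```

Even this naive version softens the sharp corners the clipper introduces; getting the aliasing properly under control would need higher oversampling factors and real anti-imaging/anti-aliasing filters, which is exactly why it belongs in the implementation rather than user code.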

---
Reply to this email directly or view it on GitHub:
https://github.com/WebAudio/web-audio-api/issues/131#issuecomment-24244469

Received on Wednesday, 11 September 2013 14:36:00 UTC