Re: [web-audio-api] (setValueCurveAtTime): AudioParam.setValueCurveAtTime (#131)

> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=17335#6) by redman on W3C Bugzilla. Thu, 06 Dec 2012 15:15:35 GMT

(In reply to [comment #6](#issuecomment-24244415))
> (In reply to [comment #5](#issuecomment-24244407))
> > Since it is an audio rate controller you should always see it as a signal
> > and apply signal theory.
> 
> True, but in this case I think that the real use case is to use quite
> low-frequency signals (like various forms of ramps that run for at least 20
> ms or so). For those scenarios, band-limiting should not be necessary. As
> long as the spec mandates a certain method of interpolation (e.g. nearest,
> linear or cubic spline), the user knows what to expect and will not try to
> make other things with it (like modulating a signal with a high-frequency
> waveform).
> 
> Also, I think it's important that all implementations behave equally here,
> because different interpolation & filtering methods can lead to quite
> different results. E.g. a 5 second fade-out would sound quite different if
> it used nearest interpolation instead of cubic spline interpolation. In that
> respect, a simpler and more performance friendly solution (like nearest or
> linear interpolation) is better, because it's easier to mandate for all
> implementations.

I can tell you from years of synthesis experience that the resolution/quality of envelopes is crucial. This is especially true for creating percussive sounds.
Let's say I have a row of samples of an exponentially rising value that I want to use as the attack of my sound (it's modulating the .value of a gain node).
Now, if you play that row back at a higher rate than the original, samples get skipped.
Most importantly, there is a good chance the last value of my exponential curve (the one hitting the maximum) will be dropped. So suddenly, my percussive sound is missing energy! (That's besides the fact that there is also aliasing involved.) Moreover, it will sound different depending on how the curve data is mapped to the output sample rate for that particular note, so any dynamics applied to, say, the time the curve runs will result in improper spectral changes to my percussive sound.
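To make the skipped-peak problem concrete, here's a rough sketch (my own illustration, nothing from the spec or any implementation) of naive nearest-index playback of a long curve over a short, fast attack; the curve length and durations are made up:

```ts
// Naive playback: for each output sample, pick one curve value by index and
// discard everything in between. Not the spec algorithm, just an illustration.
function naivePlayback(curve: Float32Array, outputLength: number): Float32Array {
  const out = new Float32Array(outputLength);
  for (let i = 0; i < outputLength; i++) {
    // the read index advances by curve.length / outputLength per output sample
    const idx = Math.floor((i * curve.length) / outputLength);
    out[i] = curve[idx];
  }
  return out;
}

// An exponential attack whose very last value carries the peak.
const curve = new Float32Array(1000);
for (let i = 0; i < curve.length; i++) {
  curve[i] = Math.pow(2, 10 * (i / (curve.length - 1))) / 1024; // rises to exactly 1.0
}

// Play the 1000-point curve over only 64 output samples (roughly a 1.5 ms attack at 44.1 kHz):
const out = naivePlayback(curve, 64);
console.log(curve[curve.length - 1]); // 1.0, the intended peak
console.log(out[out.length - 1]);     // about 0.90: the peak value was never read
```

How much of the peak is lost depends on exactly how the curve length divides into the output length, which is why the same curve can sound different from note to note.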

So undersampling will only work well if people use pre-filtered sample data as curves, and even then there is a chance that not all the energy will come through, since the user must make sure the curve data is never played back too fast.
This is very limiting, as such curves are used in the time range of 1 ms to minutes. In other words, the curve processor needs to handle an enormous range of playback rates and the results should be predictable.
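Just to put some numbers on that range (these figures are only illustrative ones I picked, assuming a 44.1 kHz output rate and a 1000-point curve):

```ts
const sampleRate = 44100;
const curvePoints = 1000;
for (const durationMs of [1, 10, 1000, 60000]) {
  const outputSamples = (durationMs / 1000) * sampleRate;
  // > 1 curve point per output sample means points get skipped (undersampling);
  // < 1 means each point has to cover many output samples (interpolation territory)
  const ratio = curvePoints / outputSamples;
  console.log(`${durationMs} ms: ${ratio.toFixed(4)} curve points per output sample`);
}
// 1 ms     -> ~22.7 points per output sample (heavy undersampling)
// 60000 ms -> ~0.0004 points per output sample (each point stretched over ~2646 samples)
```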

With naive undersampling, the results become increasingly unpredictable the more of the curve's features (its rough parts) fall outside the audio band frequency-wise.
Remember that these curves will be used heavily as envelopes, and envelopes have an important role as impulse generators. If you undersample them, you literally remove energy from the impulse they represent, periodically. You need a proper downsampling algorithm that preserves in-band energy and hopefully keeps the phases together so it doesn't smear out the impulse too much.
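As a very rough sketch of what I mean by not throwing energy away (this is just a box/average filter I made up for illustration, not a proposal for the actual resampler, which would want to be properly band-limited, e.g. a polyphase FIR):

```ts
// Average every curve value that falls inside each output sample's span, so no
// value is silently discarded. A crude box filter, but every curve point
// contributes to the output, unlike the naive pick-one-and-skip approach above.
function boxFilteredPlayback(curve: Float32Array, outputLength: number): Float32Array {
  const out = new Float32Array(outputLength);
  const step = curve.length / outputLength; // curve points per output sample
  for (let i = 0; i < outputLength; i++) {
    const start = Math.floor(i * step);
    const end = Math.min(curve.length, Math.max(start + 1, Math.floor((i + 1) * step)));
    let sum = 0;
    for (let j = start; j < end; j++) sum += curve[j];
    out[i] = sum / (end - start);
  }
  return out;
}
```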
Otherwise we could just as well go back 15 years to a time when programmers started to try to make musical instruments. ;)
But what worries me more is that it is not clear to the user that the data they use for the curve might be improper because of the curve playback rate. For instance, how long do I need to make my curve data to avoid problems when I use it for a range between 3 and 3000 ms?
I'm not sure people want to think about these things. If you offer such a feature, then I'd expect the implementation to deal with it correctly.

About undersampling, after some thought I'd say that both nearest-neighbor and linear interpolation could be handy.
The nearest-neighbor method should be done correctly though (no idea what the implementations do, but chances are they do it wrong :) ).

Usually such an algorithm has a balance point at .5: a comparison is made to see whether the value at a given time is closer to the previous or the next sample, and the output switches halfway between the samples.
This gives problems with short, stretched curves. The first sample of the curve is played back for only half a period (because the switch to the next value is made at 0.5 sample time), then all the middle samples are played for a full period (but shifted by 0.5 sample time), and then the last one for half a period again.
A better way would be to truncate the fractional part instead. That way you ensure every sample value in the curve data gets played for the correct duration, which makes much more sense musically.
So for these kinds of things, truncation is better than the usual rounding around the .5 mark.
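A tiny sketch of the difference (again just my own illustration, stretching a made-up 4-point curve over 16 output samples):

```ts
// Map output sample i to a curve index, either by rounding (switch halfway
// between values) or by truncating (each value held for its full period).
function nearestIndex(i: number, outputLength: number, curveLength: number, truncate: boolean): number {
  const pos = (i * curveLength) / outputLength; // fractional curve position
  const idx = truncate ? Math.floor(pos) : Math.round(pos);
  return Math.min(idx, curveLength - 1);
}

const curveLength = 4, outputLength = 16;
const rounded   = Array.from({ length: outputLength }, (_, i) => nearestIndex(i, outputLength, curveLength, false));
const truncated = Array.from({ length: outputLength }, (_, i) => nearestIndex(i, outputLength, curveLength, true));
console.log(rounded);   // [0,0,1,1,1,1,2,2,2,2,3,3,3,3,3,3]  first value held for only 2 samples
console.log(truncated); // [0,0,0,0,1,1,1,1,2,2,2,2,3,3,3,3]  every value held for exactly 4 samples
```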

But then sometimes you don't want to hear these steps at all.
For those cases it would be great if you could switch on a linear interpolator (it shouldn't be a bigger hit on performance than the truncation above, except when the CPU doesn't handle floats well).
The main idea is that it should be switchable.
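Something along these lines, say (the `interpolation` switch is purely hypothetical, not an existing API):

```ts
type Interpolation = "nearest" | "linear";

// Read one value from the curve at a fractional position, in either mode.
function sampleCurve(curve: Float32Array, pos: number, mode: Interpolation): number {
  const i = Math.min(Math.floor(pos), curve.length - 1); // truncating nearest-neighbor index
  if (mode === "nearest") return curve[i];
  const j = Math.min(i + 1, curve.length - 1);
  const frac = pos - i;
  // one extra multiply-add per sample compared to the truncating lookup
  return curve[i] + (curve[j] - curve[i]) * frac;
}
```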

Fancier interpolation is probably not very useful in this case.
