DelayNode delay timing specifics

When I first read that the DelayNode "delays the incoming audio
signal by a certain amount", I imagined something like

(1)     y(t + d(t)) = x(t)

for input x(t), output y(t) and delay d(t).

However, I get the impression that delay lines are often
implemented as

(2)     y(t) = x(t - d(t)).

So I think I may have been reading more into the statement above
than was intended.
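
For concreteness, here is a rough sketch (purely illustrative,
not from any spec or implementation; TypeScript, with made-up
names) of how I understand (2) is commonly realised: a circular
buffer is written at the current sample and read back d(t)
samples behind the write position, interpolating for fractional
delays.

    // Sketch of equation (2): y(t) = x(t - d(t)).
    // The buffer is written at the current sample index and read
    // back delaySamples behind it, with linear interpolation for
    // fractional delays.  Names and structure are illustrative only.
    class ReadSideDelayLine {
      private buffer: Float64Array;
      private writeIndex = 0;

      constructor(maxDelaySamples: number) {
        this.buffer = new Float64Array(Math.ceil(maxDelaySamples) + 2);
      }

      // Store x(t), then return x(t - d(t)), with d(t) in samples.
      process(input: number, delaySamples: number): number {
        this.buffer[this.writeIndex] = input;

        const len = this.buffer.length;
        const readPos = this.writeIndex - delaySamples;
        const i0 = Math.floor(readPos);
        const frac = readPos - i0;
        const a = this.buffer[((i0 % len) + len) % len];
        const b = this.buffer[(((i0 + 1) % len) + len) % len];

        this.writeIndex = (this.writeIndex + 1) % len;
        return a + frac * (b - a);
      }
    }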

Could we clarify which behaviour is intended, please?  The two
differ for variable delays of noticeable length and for larger
rates of change in the delay.

There is some discussion on these two approaches in [1].

Both use cases sound to me like they could be useful, so may I
propose specifying equation (2)?

I propose (2) because, for small rates of change in the delay,
behaviour (1) can be approximated using (2) simply by applying
changes in the delay at a later time: a change requested at
input time t takes effect at output time t + d.  Going the other
way, using (1) to approximate (2), would require applying the
changes at an earlier time, which seems much harder.
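
As a sanity check on that claim, here is a hypothetical sketch
of the conversion: a per-sample delay curve meant for behaviour
(1) is re-timed into an approximately equivalent curve for
implementation (2), by moving each requested value to the later
output time at which it takes effect.  The function name and the
fill-forward strategy are my own invention.

    // Re-time a per-sample delay curve d1 (intended for behaviour
    // (1)) into an approximate curve d2 for implementation (2):
    // the delay requested at input time t takes effect at output
    // time t + d1[t].  Output times that receive no value keep the
    // previous one.
    function retimeDelayCurve(d1: number[]): number[] {
      const d2: number[] = new Array(d1.length).fill(NaN);
      for (let t = 0; t < d1.length; t++) {
        const s = Math.min(d1.length - 1, Math.round(t + d1[t]));
        d2[s] = d1[t];
      }
      let last = d1[0];
      for (let s = 0; s < d2.length; s++) {
        if (Number.isNaN(d2[s])) d2[s] = last;
        else last = d2[s];
      }
      return d2;
    }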

If the rate of change in the delay becomes large, then (1)
introduces amplitude changes and possibly overlapping behaviour,
neither of which is easily emulated using (2).  However, I
suspect most implementations are neither likely nor expected to
handle large rates of change in delay well anyway.
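
To make the amplitude and overlap concern concrete, here is an
equally hypothetical sketch of (1), in which each input sample
is written d(t) samples ahead into the output.  When d(t) falls
quickly, several input samples land on the same output time and
sum; when it rises quickly, gaps appear.

    // Sketch of equation (1): y(t + d(t)) = x(t).
    // Each input sample is written d(t) samples ahead in the
    // output.  Rapidly decreasing d(t) piles samples onto the same
    // output slot (overlap and amplitude growth); rapidly
    // increasing d(t) leaves gaps.  Illustrative only.
    function writeSideDelay(input: number[],
                            delay: (t: number) => number): number[] {
      const out: number[] = [];
      for (let t = 0; t < input.length; t++) {
        const outIndex = Math.round(t + delay(t));
        while (out.length <= outIndex) out.push(0);
        out[outIndex] += input[t]; // overlapping contributions sum
      }
      return out;
    }

    // E.g. a delay dropping from 4 samples to 0 over a few samples:
    //   writeSideDelay([1, 1, 1, 1, 1], t => Math.max(0, 4 - 2 * t))
    // yields [0, 0, 1, 2, 2] -- two output samples get doubled
    // amplitude, which (2) would not produce.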

"When the delay time is changed, the implementation must make the
transition smoothly, without introducing noticeable clicks or
glitches to the audio stream" and the easiest way to do this is to
limit the rate of change in delay.
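
For what it's worth, here is a hypothetical sketch of that rate
limiting: slew the effective delay toward its target by at most
a fixed amount per sample, so a step change in delayTime becomes
a short, bounded glide rather than a discontinuity.  The names
and the particular limiting scheme are made up for illustration.

    // Slew the delay parameter toward its target at a bounded
    // rate, so step changes become short glides.  maxSlewPerSample
    // bounds the per-sample change in delay, which also bounds the
    // momentary pitch shift heard during the transition.
    class SlewedDelayParameter {
      private current: number;

      constructor(initialDelaySamples: number,
                  private maxSlewPerSample: number) {
        this.current = initialDelaySamples;
      }

      // Advance one sample toward target; return the delay to use.
      next(target: number): number {
        const diff = target - this.current;
        const step = Math.max(-this.maxSlewPerSample,
                              Math.min(this.maxSlewPerSample, diff));
        this.current += step;
        return this.current;
      }
    }

Feeding the slewed value into a read-side delay like the one
sketched above gives a smooth glide between the old and new
delay, rather than a click.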

[1] https://ccrma.stanford.edu/~jos/doppler/Doppler_Simulation_Delay_Lines.html

Received on Sunday, 25 August 2013 18:01:05 UTC