Re: Reflections on writing a sequencer

Actually, I don't think this demo illustrates a good technique for a sequencer. The JavaScriptAudioNode doesn't do anything here except generate events, and those events will jitter just as any other callback does. It is not reliable to use the timing of onaudioprocess events as an indicator of real time, as this demo appears to do.

Using noteOn/noteOff to schedule nodes that produce sound a short time in the future is the way to go. If you are using that technique correctly, you get true sample-accurate timing and very little sensitivity to the callback mechanism.
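
For illustration only, here is a minimal sketch of that kind of look-ahead scheduling, assuming a webkitAudioContext and a plain oscillator voice; scheduleAhead, noteTimes, and scheduler are made-up names for this example, not part of any API:

    var context = new webkitAudioContext();
    var scheduleAhead = 0.1;              // seconds of look-ahead (illustrative)
    var noteTimes = [0.5, 1.0, 1.5, 2.0]; // note start times in seconds (illustrative)
    var nextNote = 0;

    function scheduler() {
      // Schedule every note whose start time falls inside the look-ahead window.
      while (nextNote < noteTimes.length &&
             noteTimes[nextNote] < context.currentTime + scheduleAhead) {
        var osc = context.createOscillator();
        osc.connect(context.destination);
        osc.noteOn(noteTimes[nextNote]);         // sample-accurate start
        osc.noteOff(noteTimes[nextNote] + 0.25); // sample-accurate stop
        nextNote++;
      }
    }

    // The timer only has to wake up often enough to keep the window topped up;
    // its own jitter does not affect the audio timing.
    setInterval(scheduler, 25);

The point is that the setInterval callback can arrive early or late without consequence, because the actual start and stop times are handed to the audio engine in advance.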

> 
>    If you put an audioContext.currentTime in your JavaScriptAudioNode.onaudioprocess() function you will notice some slop in the time it reports (that's fine, I suppose, if it is accurately reflecting the jitter in the callbacks). But what you would really like, presumably, is to know the exact time in the sample stream that the buffer you are filling corresponds to. To do that, you just need to keep track of the number of samples you have processed since starting. This would produce rock-solid timing of audio events even if the buffer size changed on every callback or if there was jitter in the interval between callbacks.

An AudioProcessingEvent exposes the exact time of the audio to be generated in the sample stream as its "playbackTime" attribute. That doesn't make callbacks any more useful as a source of exact timing, but it does mean there is no need to keep track of time in separate variables.
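
For what it's worth, here is a minimal sketch of reading that value inside onaudioprocess, with the hand-kept sample count from the quoted message alongside it for comparison. It assumes "node" came from context.createJavaScriptNode() and that the implementation actually exposes playbackTime; samplesProcessed is just an illustrative name:

    var samplesProcessed = 0;

    node.onaudioprocess = function (e) {
      // Time of the first sample of this buffer, as given by the spec:
      var bufferStart = e.playbackTime;

      // The equivalent value kept by hand, immune to callback jitter:
      var manualStart = samplesProcessed / context.sampleRate;
      samplesProcessed += e.outputBuffer.length;

      // ... fill e.outputBuffer here ...
    };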

...Joe

Received on Thursday, 26 July 2012 13:25:28 UTC