Re: onaudioprocess() as event generator (WAS: Reflections on writing a sequencer)

Hello,

     What algorithm does WebAudio use to schedule the order in which 
the onaudioprocess() handlers of different JavaScriptAudioNodes are called?
     Is it the order in which they are created?

     I think that in my "event generation" example, the only reason I 
could generate events in one node for another *within* the time of a 
single onaudioprocess() buffer is that the callback of the node 
receiving events happened, by luck, to be scheduled after the one for 
the event-generator node.
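
     To make the question concrete, here is a minimal sketch of the 
situation I mean (the node names and buffer size are made up):

    var context = new webkitAudioContext();
    var eventQueue = [];   // shared between the two nodes

    // "generator" node: pushes an event during its callback
    var generator = context.createJavaScriptNode(1024, 1, 1);
    generator.onaudioprocess = function (e) {
        eventQueue.push("tick at " + context.currentTime);
    };

    // "receiver" node: drains the queue during its callback
    var receiver = context.createJavaScriptNode(1024, 1, 1);
    receiver.onaudioprocess = function (e) {
        while (eventQueue.length > 0) {
            console.log(eventQueue.shift());
        }
    };

    generator.connect(context.destination);
    receiver.connect(context.destination);

     If the receiver's callback happens to run after the generator's in 
each buffer period, it sees the events within the same period; if the 
order were reversed, everything would arrive one buffer late.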

Thanks,
                  - lonce


On 28/7/2012 10:12 AM, lonce wyse wrote:
>
> Hello,
>
>     Joe, you were right, of course, that generating events in 
> onaudioprocess() provides no accuracy advantages over other methods. 
> It is just a callback like any other in that respect.
>
>     There are a couple of reasons why you might want to use a 
> JavaScriptAudioNode's onaudioprocess() as a callback for generating events:
>     a) the buffer is "the right size," allowing you to generate events 
> with the same kind of real-time responsiveness as the audio synthesis, and
>     b) it permits you to coordinate audio generation intimately with 
> event generation if you are so inclined (sketched below).
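>
>     For (b), I mean something along these lines (just a sketch; 
> fireEvent() is a stand-in for whatever event dispatch you use):
>
>     var node = context.createJavaScriptNode(1024, 1, 1);
>     var tickPeriod = Math.floor(context.sampleRate / 4); // 4 ticks/sec
>     var samplesUntilTick = 0;
>
>     node.onaudioprocess = function (e) {
>         var out = e.outputBuffer.getChannelData(0);
>         for (var i = 0; i < out.length; i++) {
>             if (samplesUntilTick <= 0) {
>                 fireEvent(i);      // event fired at the exact sample...
>                 out[i] = 1.0;      // ...where the click is synthesized
>                 samplesUntilTick = tickPeriod;
>             } else {
>                 out[i] = 0.0;
>             }
>             samplesUntilTick--;
>         }
>     };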
>
>     Anyway, I was curious about whether events generated within the 
> onaudioprocess() buffer period would be handled in a timely fashion, 
> so I wrote this little test model:
> http://anclab.org/webaudio/timing/noiseTick.html
>
>     You can adjust the size of the "compute ahead" window. As you will 
> see, you can reduce that time to pretty much exactly the audio buffer 
> size (extended perhaps by a few ms to cover jitter) and still get 
> excellent timing performance.
>
> Best,
>              - lonce
>
>
> On 26/7/2012 9:24 PM, Joseph Berkovitz wrote:
>> Actually, I don't think that this demo illustrates a good technique 
>> for a sequencer. The JavaScriptAudioNode doesn't do anything here 
>> except generate events, and there is going to be jitter in these 
>> events, just as there is jitter in any other callback.  It is not 
>> reliable to use the timing of onaudioprocess events as an indicator 
>> of real time, as this demo appears to do.
>>
>> Using noteOn/noteOff to schedule nodes that produce sound a short 
>> time in the future is the way to go. If you are using that technique 
>> correctly, you get true sample-accurate timing and very little 
>> sensitivity to the callback mechanism.
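>>
>> For example, a minimal sketch of that pattern (the interval and 
>> look-ahead values are arbitrary, and clickBuffer is assumed to be 
>> loaded already):
>>
>>     var LOOKAHEAD = 0.1;   // seconds of scheduling headroom
>>     var nextNoteTime = context.currentTime;
>>
>>     setInterval(function () {
>>         // schedule every note that falls in the look-ahead window
>>         while (nextNoteTime < context.currentTime + LOOKAHEAD) {
>>             var src = context.createBufferSource();
>>             src.buffer = clickBuffer;       // assumed preloaded
>>             src.connect(context.destination);
>>             src.noteOn(nextNoteTime);       // sample-accurate start
>>             nextNoteTime += 0.25;           // quarter-second grid
>>         }
>>     }, 25);
>>
>> The timer only has to wake up sometime before the look-ahead window 
>> empties; the actual onsets come from noteOn(), so they are 
>> sample-accurate.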
>>
>>>     If you log audioContext.currentTime in your 
>>> JavaScriptAudioNode.onaudioprocess() function, you will notice some 
>>> slop in the time it reports (which is fine, I suppose, if it is 
>>> accurately reflecting the jitter in the callbacks). But what you 
>>> would really like, presumably, is to know the exact time in the 
>>> sample stream that the buffer you are filling corresponds to. To do 
>>> that, you just need to keep track of the number of samples you have 
>>> processed since starting. This would produce rock-solid timing of 
>>> audio events even if the buffer size changed on every callback or 
>>> there was jitter in the interval between callbacks.
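>>>
>>> In code, the bookkeeping is just something like:
>>>
>>>     var samplesProcessed = 0;
>>>
>>>     node.onaudioprocess = function (e) {
>>>         // exact stream time of the first sample of this buffer
>>>         var bufferStartTime = samplesProcessed / context.sampleRate;
>>>         // ... fill e.outputBuffer relative to bufferStartTime ...
>>>         samplesProcessed += e.outputBuffer.length;
>>>     };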
>> An AudioProcessingEvent exposes the exact time of the audio to be 
>> generated in the sample stream as the "playbackTime" attribute.  Not 
>> that this makes callbacks any more useful as a source of exact 
>> timing, but it does mean that there is no need to keep track of time 
>> in separate variables.
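>>
>> That is, something like the following (assuming the implementation 
>> populates playbackTime):
>>
>>     node.onaudioprocess = function (e) {
>>         // stream time of the first sample of this buffer
>>         var t = e.playbackTime;
>>         // synthesize or schedule relative to t
>>     };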
>>
>> ...Joe
>

Received on Sunday, 29 July 2012 01:23:39 UTC