
Re: Reflections on writing a sequencer

From: lonce wyse <lonce.wyse@zwhome.org>
Date: Thu, 26 Jul 2012 09:14:06 +0800
Message-ID: <501099DE.9010608@zwhome.org>
To: r baxter <baxrob@gmail.com>
CC: Joe Berkovitz <joe@noteflight.com>, public-audio@w3.org

Hi Roby, Joe, All,

     Nice demo.
     I'm convinced that this is the right way to trigger audio events 
(rather than using timer callbacks or requestAnimationFrame() in 
conjunction with audioContext.currentTime), because you could start a 
sample counter when you hit "start" and thereby get sub-buffer-size 
(indeed sample-accurate) time intervals between note onsets.

     If you read audioContext.currentTime inside your 
JavaScriptAudioNode.onaudioprocess() function, you will notice some slop 
in the time it reports (that's fine, I suppose, if it is accurately 
reflecting the jitter in the callbacks). But what you would really like, 
presumably, is to know the exact time in the sample stream that the 
buffer you are filling corresponds to. To do that, you just need to keep 
track of the number of samples you have processed since starting. This 
would produce rock-solid timing of audio events even if the buffer size 
changed on every callback, or if there was jitter in the interval 
between callbacks.
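The sample-counter idea can be sketched as a plain function (the names here are illustrative, not part of any API, and no real Web Audio calls are made): each callback derives its buffer's start time from a running sample count rather than from the context clock.

```javascript
// A minimal sketch of a sample-counter clock. In a real
// onaudioprocess handler you would call onBuffer(buffer.length)
// once per callback; here we just exercise the arithmetic.

function makeBufferClock(sampleRate) {
  let samplesProcessed = 0;
  return function onBuffer(bufferLength) {
    // Time (in seconds) of the first sample of this buffer,
    // measured from when processing started.
    const bufferStartTime = samplesProcessed / sampleRate;
    samplesProcessed += bufferLength;
    return bufferStartTime;
  };
}

// Even with varying buffer sizes the derived times stay sample-accurate:
const clock = makeBufferClock(44100);
const t0 = clock(2048); // 0
const t1 = clock(1024); // 2048 / 44100
const t2 = clock(2048); // (2048 + 1024) / 44100
```

Because the clock depends only on how many samples have been handed out, jitter in when the callbacks fire never accumulates into the note times.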

             - lonce

On 26/7/2012 2:29 AM, r baxter wrote:
> Hi all,
> I've been playing around with this this morning.
> See http://jsfiddle.net/UpaCH/
> Seems to work basically as expected (though, oddly, only with jsNode
> buffer size of 2048 - on my system at least)
> -Roby
> On Wed, Jul 25, 2012 at 6:00 AM, lonce wyse<lonce.wyse@zwhome.org>  wrote:
>> Hi -
>>      Yes I realized that as I hit the email send button - however, it
>> actually isn't the periodicity of the callback that matters. They could be
>> aperiodic and the buffers to fill could be of different length - as long as
>> you know what time the sample buffer in the callback represents.
>> - lonce
>> On 25/7/2012 8:46 PM, Joe Berkovitz wrote:
>> One other important point I overlooked: JSAudioNode processing callbacks are
>> not sample accurate in terms of absolute time. They may jitter around since
>> they precede actual sound output by a variable amount depending on the audio
>> pipeline's overall latency at the time. The browser is free to play around
>> with this latency to provide glitch free output.
>> So it doesn't really provide you with the "rock solid" timing that you might
>> expect.
>> ...j
>> On Jul 25, 2012 8:28 AM, "lonce wyse"<lonce.wyse@zwhome.org>  wrote:
>>> Hi  -
>>>      Of course, you would want to generate events as short a time into the
>>> future as possible in order to stay responsive to rate (or tempo) changes.
>>>      Ideally a JavaScriptAudioNode could be used as the event generator.
>>> Its onaudioprocess() method could check the length of the output buffer it
>>> is passed, and do nothing else but call "note on" events for other nodes it
>>> wants to play within that short period of time.
>>>      I haven't tried that yet, but would noteOn events be handled properly
>>> when generated in this "just in time" manner? Would it be a violation of
>>> protocol to use onaudioprocess() as what would amount to a rock-solid,
>>> sample-accurate periodic callback function?
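The "just in time" idea in the quoted question can be simulated without any real Web Audio objects (all names below are illustrative): each callback covers a known span of the sample stream, and fires only the events whose onsets fall inside that span.

```javascript
// Each simulated onaudioprocess callback covers
// [start, end) seconds of the stream; events are triggered
// in exactly the buffer whose span contains their onset.

function eventsInWindow(events, windowStart, windowEnd) {
  return events.filter(e => e.time >= windowStart && e.time < windowEnd);
}

const events = [{ time: 0.0 }, { time: 0.25 }, { time: 0.5 }];
const sampleRate = 44100;
const bufferLength = 2048;

// Walk twelve consecutive buffers (~0.56 s) and collect what
// each one would trigger.
let fired = [];
let samplesProcessed = 0;
for (let i = 0; i < 12; i++) {
  const start = samplesProcessed / sampleRate;
  const end = (samplesProcessed + bufferLength) / sampleRate;
  fired = fired.concat(eventsInWindow(events, start, end));
  samplesProcessed += bufferLength;
}
```

Because the windows tile the stream exactly, every event fires once and only once, regardless of the buffer size chosen.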
>>> Best,
>>>               - lonce
>>> On 25/7/2012 12:40 AM, Joseph Berkovitz wrote:
>>> Hi Adam,
>>> I think one general way to structure sequencer playback is as follows --
>>> I've used this approach with WebAudio successfully in the past:
>>> 1. Just before starting playback, take note of the AudioContext's
>>> currentTime property.  Add a small time offset to it, say 100 ms.  The
>>> result will be your performance start time, corresponding to time offset
>>> zero in your sequencer data.  (The time offset provides a short window in
>>> which to schedule the first events in the sequence).
>>> 2. Create a scheduler function that will run periodically, which examines
>>> the AudioContext's currentTime and subtracts the previously captured
>>> startTime. That gives you a "current performance time" at the moment the
>>> callback occurs, expressed in terms of your sequencer data.  Then create
>>> AudioNodes representing all sequencer events that occur within an arbitrary
>>> time window after this current performance time (say, several seconds) and
>>> schedule them with noteOn/noteOff.
>>> 3. Call the function immediately, and also use setInterval() or
>>> setTimeout() to schedule callbacks to the above function on some reasonable
>>> basis, say every 100-200 ms. The exact interval is not important and can be
>>> tuned for best performance.
>>> This approach is relatively insensitive to callback timing and in general
>>> allows audio to be scheduled an arbitrary interval in advance of its being
>>> played.
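The three steps above can be sketched as follows. This is a simulation under stated assumptions: the function names and the mock clock are illustrative, and a real implementation would read currentTime from an actual AudioContext and call noteOn/noteOff on real AudioNodes, driving tick() from setInterval.

```javascript
// Lookahead scheduler sketch (illustrative names throughout).

const LOOKAHEAD = 2.0;     // schedule-ahead window, seconds
const START_OFFSET = 0.1;  // small offset before the first event (step 1)

function makeScheduler(getCurrentTime, sequencerEvents, scheduleNote) {
  const startTime = getCurrentTime() + START_OFFSET;   // step 1
  let nextIndex = 0;
  return function tick() {                             // steps 2-3
    const perfTime = getCurrentTime() - startTime;
    while (nextIndex < sequencerEvents.length &&
           sequencerEvents[nextIndex].time < perfTime + LOOKAHEAD) {
      const e = sequencerEvents[nextIndex++];
      scheduleNote(startTime + e.time); // absolute context time
    }
  };
}

// Usage with a mock clock; in a browser you would call tick()
// immediately and then via setInterval(tick, 150).
let now = 10.0;
const scheduled = [];
const tick = makeScheduler(() => now,
                           [{ time: 0 }, { time: 0.5 }, { time: 3.0 }],
                           t => scheduled.push(t));
tick();        // events at 0 and 0.5 fall within the 2 s lookahead
now += 1.5;
tick();        // by now the event at 3.0 falls inside the window too
```

Note how the exact interval between tick() calls never appears in the scheduled times; it only has to be short enough that the lookahead window never runs dry.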
>>> ...Joe
>>> On Jul 24, 2012, at 11:40 AM, Adam Goode wrote:
>>> Hi,
>>> Yesterday I tried to write an extremely simple sequencer using webaudio.
>>> My goal was to have a tone play periodically, at a user-selectable low
>>> frequency interval.
>>> The main problem I ran into was the difficulty of scheduling events
>>> synchronized with the a-rate clock.
>>> If I want to play a tone twice per second, I want to call this code in a
>>> loop, indefinitely:
>>> var startTime = ....
>>> var o = c.createOscillator();
>>> o.connect(c.destination);
>>> o.noteOn(startTime);
>>> o.noteOff(startTime + 0.1);
>>> I can't just put it in a loop, I need to schedule this in a callback, when
>>> necessary to fill the event queue. But what callback to use? setInterval is
>>> not appropriate, since the setInterval clock will skew quickly from
>>> c.currentTime. And busy looping with setInterval(0) will consume a lot of
>>> CPU and gets throttled when switching tabs (try putting the drum machine
>>> demo in a background tab and see).
>>> My solution was this:
>>> var controlOscillator = c.createOscillator();
>>> controlOscillator.frequency.value = 2;
>>> var js = c.createJavaScriptNode(256, 1, 0);
>>> controlOscillator.connect(js);
>>> js.onaudioprocess = function(e) {
>>>    ... detect positive zero crossing from control oscillator ...
>>>    if (zeroCross) {
>>>      var o = c.createOscillator();
>>>      o.connect(c.destination);
>>>      var startTime = ... zero crossing offset + playbackTime ...
>>>      o.noteOn(startTime);
>>>      o.noteOff(startTime + 0.1);
>>>    }
>>> };
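One way to fill in the elided zero-crossing test in the callback above (the function below is illustrative; the original message omits the details) is to scan the control signal for a sample that goes from non-positive to positive and report its offset, from which the noteOn time can be computed.

```javascript
// Find the offset of the first positive-going zero crossing in a
// buffer of control samples. prevSample is the last sample of the
// previous buffer, so crossings on the buffer boundary are caught.

function findPositiveZeroCrossing(samples, prevSample) {
  let prev = prevSample;
  for (let i = 0; i < samples.length; i++) {
    if (prev <= 0 && samples[i] > 0) return i; // offset within buffer
    prev = samples[i];
  }
  return -1; // no crossing in this buffer
}

// A 2 Hz control oscillator crosses zero going upward twice per second:
const buf = [-0.3, -0.1, 0.2, 0.5, 0.4];
const offset = findPositiveZeroCrossing(buf, -0.5); // → 2
```

The returned offset, divided by the sample rate and added to the buffer's start time, gives the onset for noteOn.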
>>> This does work (except for missing playbackTime
>>> https://bugs.webkit.org/show_bug.cgi?id=61524, needing to connect the
>>> javascript node to destination, and another bug on chrome
>>> http://crbug.com/138646), but is awkward. There is also the question of
>>> having a disconnected graph: I am sending control data, not audio data, so I
>>> don't want to connect it to destination.
>>> I essentially want to have a callback for getting new control data, to
>>> keep the event pipeline filled without overflowing any noteOn buffer or
>>> falling behind. Is the javascript node appropriate for this? I feel like
>>> there could be something more explicit, like a setInterval off of the audio
>>> context.
>>> Adam
>>> ... .  .    .       Joe
>>> Joe Berkovitz
>>> President
>>> Noteflight LLC
>>> 84 Hamilton St, Cambridge, MA 02139
>>> phone: +1 978 314 6271
>>> www.noteflight.com
Received on Thursday, 26 July 2012 01:14:41 UTC