- From: Joseph Berkovitz <joe@noteflight.com>
- Date: Wed, 16 Nov 2011 20:34:41 -0500
- To: Jonathan Baudanza <jon@jonb.org>
- Cc: public-audio@w3.org
- Message-Id: <0B7EAA69-1522-4044-9BAC-145304356E56@noteflight.com>
Jon,

There is a middle ground between scheduling all your nodes and having a 100 ms timeout. What I've done so far in my web audio synth code is to set a timeout at a longer interval (I think mine was 1 s) and schedule a larger number of nodes a longer distance into the future. This makes it much less likely that a callback delay will affect playback. In essence you are keeping the scheduler "topped up" so it always has a fair number of nodes to play back. That might mitigate some of the need for this feature, although I see the value of it.

. . . Joe

Joe Berkovitz
President
Noteflight LLC
84 Hamilton St, Cambridge, MA 02139
phone: +1 978 314 6271
www.noteflight.com

On Nov 16, 2011, at 6:14 PM, Jonathan Baudanza wrote:

> On Wed, Nov 16, 2011 at 2:08 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
> > For the use-case of "I have a really long playlist and I don't want to schedule it all up-front" (which I think is what https://bugs.webkit.org/show_bug.cgi?id=70061#c0 is about), I think the best solution is to use setTimeouts and just make sure that you keep the playlist up-to-date at least a few seconds ahead of the current time so that it doesn't under-run.
> >
> > For interactive stuff, I don't see how running script at a particular moment in the future helps you. Wouldn't your input event handler directly schedule whatever new playback elements it needs to trigger?
>
> Just for reference, my app is www.beatlab.com. In my webaudio branch, I have a setTimeout callback set at 100 ms intervals. At each interval it checks if there is a note coming up and schedules it. If setTimeout doesn't fire for some reason, sounds aren't scheduled properly.
>
> I think what you're suggesting is that I schedule all my audio nodes at the beginning of a loop, and then add and remove nodes from the schedule as the user interacts with the application. I don't see any reason why this wouldn't work. The downside is that I would have a lot more audio nodes to manage. The advantage is I wouldn't have to depend on a callback firing at the right time.
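[Archive editor's note: the "topped up" lookahead scheduler Joe describes could be sketched roughly as below. The names (`notesToSchedule`, `startScheduler`, `playNote`, the interval and lookahead constants) are illustrative, not from either author's actual code; the idea is only that each tick schedules everything due within a window well longer than the tick itself, so a late callback does not cause an under-run.]

```javascript
// Illustrative sketch of a lookahead ("topped up") scheduler.
// Each tick schedules every note falling inside the next LOOKAHEAD
// seconds, so a setTimeout delay shorter than the remaining lookahead
// cannot cause a missed note.

const TICK_MS = 1000;   // timer interval (Joe mentions roughly 1 s)
const LOOKAHEAD = 3.0;  // how far ahead to schedule, in seconds

// Pure helper: pick the notes due within [now, now + lookahead)
// that have not been scheduled yet.
function notesToSchedule(notes, now, lookahead) {
  return notes.filter(n => !n.scheduled && n.time < now + lookahead);
}

// Wiring it to the Web Audio API (ctx is an AudioContext; playNote
// would typically create a source node and call source.start(when)).
function startScheduler(ctx, notes, playNote) {
  function tick() {
    for (const n of notesToSchedule(notes, ctx.currentTime, LOOKAHEAD)) {
      playNote(n, n.time);
      n.scheduled = true;
    }
    setTimeout(tick, TICK_MS);
  }
  tick();
}
```

The trade-off against a 100 ms polling loop is exactly the one discussed above: more nodes are alive at once, but playback survives callback jitter up to roughly LOOKAHEAD minus the tick interval.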
Received on Thursday, 17 November 2011 01:35:10 UTC