Re: Concerning the gap-less output of real-time generated audio in JavaScript

>  
> - Web Audio forces the JavaScript developer to maintain an audio buffer in JavaScript. This applies to audio that cannot be timed to the Web Audio callback, such as an app driven by setInterval that has to produce x samples every x milliseconds. The Mozilla Audio Data API instead lets the JS developer push samples to the browser and have the browser manage the buffer on its own. The callback grabbing x samples per call is not itself a buffer; that is just the callback draining the larger buffer I'm talking about. Ring-buffer management in JavaScript takes up CPU, and in my opinion it would always be better to let the browser handle that task.
> 
> This is a good point as well. But IMO a more useful approach would be to keep the callback API and add an alternate write call that mixes the written buffers into the buffers provided by the callback (when one is provided), plus another call that writes ahead of time, deferring callbacks. And please, don't make the developer handle the tail, as in mozAudio. Something like node.write(buffer, channelCount = 2, sampleRate = [context default]) and node.writeAhead( -||- );
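
To make the proposed shape concrete, here is a minimal sketch from the developer's side. The write/writeAhead calls and their parameters follow the quote above and are purely hypothetical; the surrounding context and node setup use the WebKit API of the time only for framing:

    // Hypothetical sketch: write()/writeAhead() come from the proposal
    // above; neither exists in any shipping browser.
    var context = new webkitAudioContext();        // vendor-prefixed circa 2011
    var node = context.createJavaScriptNode(2048, 1, 1);  // callback node

    // Generate one block of a 440 Hz sine wave.
    var samples = new Float32Array(4096);
    for (var i = 0; i < samples.length; i++) {
      samples[i] = Math.sin(2 * Math.PI * 440 * i / context.sampleRate);
    }

    // Mix these samples into the buffers the node's callback produces.
    node.write(samples, 1, context.sampleRate);       // hypothetical

    // Or queue samples ahead of time, deferring callbacks until consumed.
    node.writeAhead(samples, 1, context.sampleRate);  // hypothetical

    node.connect(context.destination);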

Yet another approach is to directly request that the audio destination node pre-buffer N samples, which in effect asks its source nodes (and, transitively, the whole graph feeding them) to provide those samples. This may be simpler, since it doesn't touch the audio graph piecemeal, doesn't raise questions of how to time-sync "immediate" writes against pipelined writes, and addresses the general (and probably occasional) need to pre-buffer some time interval based on application knowledge.
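
As a sketch of what that could look like (prebuffer() is a hypothetical method that appears in no draft or browser; the numbers are illustrative):

    // Hypothetical: ask the destination to pull N samples through the
    // graph right now, so rendered audio is queued before a known stall.
    var context = new webkitAudioContext();   // vendor-prefixed circa 2011

    function beforeExpensiveCanvasWork() {
      // Request half a second of audio up front; the destination would in
      // turn ask its source nodes (and their whole dependent graph) for it.
      context.destination.prebuffer(0.5 * context.sampleRate);  // hypothetical
      // ...the main thread can now stall briefly without starving playback.
    }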

>  
> - "The callback method knows how often to fire," this is a fallacy, even flash falls for this issue and can produce clicks and pops on real-time generated audio (Even their docs hint at this). This is because by the time the callback API figures out a delay, its buffering may be premature due to previous calculations and may as a result gap the audio. It is imperative you let the developer control the buffering process, since only the developer would truly know how much buffering is needed. Web Audio in chrome gaps out for instance when we're drawing to a canvas stretched to fullscreen and a canvas op takes a few milliseconds to perform, to a reasonable person this would seem inappropriate. This ties in basically with the previous point of letting the browser manage the buffer passed to it, and allowing the JS developer to buffer ahead of time rather than having a real-time thread try to play catch-up with an inherently bad plan.
> 
> You're right: if the callback blocks for longer than the buffer takes to play, pops and cracks are inevitable, but... that is the nature of real-time digital audio, no matter what platform. Having a write-only API is not an option in this case.
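
To put numbers on that constraint, here is a rough sketch against the callback node shipping in WebKit builds at the time (createJavaScriptNode; the buffer size and the synthesis loop are illustrative):

    var context = new webkitAudioContext();     // vendor-prefixed circa 2011
    var bufferSize = 2048;                      // samples per callback
    var node = context.createJavaScriptNode(bufferSize, 1, 1);

    // Each callback must finish within bufferSize / sampleRate seconds:
    // 2048 / 44100 is roughly 46 ms. Block longer than that (a slow
    // fullscreen canvas draw, a GC pause) and the output underruns,
    // producing the pops and cracks described above.
    var phase = 0;
    node.onaudioprocess = function (event) {
      var out = event.outputBuffer.getChannelData(0);
      for (var i = 0; i < out.length; i++) {
        out[i] = Math.sin(phase);               // trivial sine "synth"
        phase += 2 * Math.PI * 440 / context.sampleRate;
      }
    };
    node.connect(context.destination);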

I endorse supporting a blend of approaches: some developer control over buffering, layered on a robust automatic pipeline. As a corollary, I don't think either approach alone is a cure for all ills.

My experience is that in a complex runtime environment there is generally no way to outright prevent glitches, no matter how much programmer smarts are in control of buffering. Also, just as developer-written code can "know" things about what the app environment is doing, a self-regulating callback method can "know" things about what the internal browser environment is doing. Both kinds of knowledge are important.
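
One way to picture that blend: app code keeps a write-ahead ring buffer sized by what the app knows, while the callback feeds back underrun information so the buffering self-regulates. Everything below (the ring buffer, the growth policy) is an illustrative assumption, not a proposed API:

    // Illustrative only: a JS-side ring buffer that grows when the audio
    // callback observes starvation.
    var targetAhead = 4096;                   // the app's initial guess
    var ring = new Float32Array(65536);
    var readPos = 0, writePos = 0;

    function available() {
      return (writePos - readPos + ring.length) % ring.length;
    }

    // Called from the audio callback: drain samples, grow on underrun.
    function pull(out) {
      if (available() < out.length) {
        targetAhead = Math.min(targetAhead * 2, ring.length / 2);
        for (var i = 0; i < out.length; i++) out[i] = 0;  // emit silence
        return;
      }
      for (var i = 0; i < out.length; i++) {
        out[i] = ring[readPos];
        readPos = (readPos + 1) % ring.length;
      }
    }

    // Called from app code (e.g. a setInterval tick): top up the buffer.
    function push(generateSample) {
      while (available() < targetAhead) {
        ring[writePos] = generateSample();
        writePos = (writePos + 1) % ring.length;
      }
    }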

... .  .    .       Joe

Joe Berkovitz
President
Noteflight LLC
84 Hamilton St, Cambridge, MA 02139
phone: +1 978 314 6271
www.noteflight.com

Received on Monday, 11 July 2011 12:52:21 UTC