Concerning the gapless output of real-time generated audio in JavaScript

I'll briefly compare the Mozilla Audio Data API and the Web Audio API and
run through a list of points where Web Audio could be improved.

- Web Audio does not allow resampling. This is a major thorn in probably a
couple people's sides, because I have to do it manually in JavaScript
(sketched below). If there is a security concern about bottlenecking, I'd
assume implementations could cap the number of resampling nodes allowed to
run concurrently.
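
Here is a minimal linear-interpolation resampler in JavaScript showing the
manual work in question; the function name, rates, and structure are mine,
purely for illustration, and are not part of either API:

    // Resample `samples` from `fromRate` to `toRate` by linear interpolation,
    // e.g. upsampling a 22050 Hz synth to a 44100 Hz output device.
    function resampleLinear(samples, fromRate, toRate) {
      var ratio = fromRate / toRate;
      var outLength = Math.floor(samples.length / ratio);
      var out = new Float32Array(outLength);
      for (var i = 0; i < outLength; i++) {
        var pos = i * ratio;                  // fractional source position
        var left = Math.floor(pos);
        var right = Math.min(left + 1, samples.length - 1);
        var frac = pos - left;
        // Weighted average of the two nearest source samples.
        out[i] = samples[left] * (1 - frac) + samples[right] * frac;
      }
      return out;
    }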

- Web Audio forces the JavaScript developer to maintain an audio buffer in
JavaScript. This applies to audio that cannot be timed to the Web Audio
callback, such as an app driven by setInterval that has to produce x samples
every y milliseconds. The Mozilla Audio Data API lets the JS developer push
samples to the browser and have the browser manage the buffer on its own.
The callback grabbing some number of samples each call is not a buffer in
this sense; that is just the callback draining the larger buffer I'm talking
about. Ring-buffer management in JavaScript takes up CPU time, and in my
opinion it would always be better to let the browser handle such a task
(sketched below).
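
Here is roughly what that bookkeeping looks like today. The ring-buffer code
is mine, not any API; the node creation assumes Chrome's current prefixed
webkitAudioContext and createJavaScriptNode:

    var RING_SIZE = 16384;
    var ring = new Float32Array(RING_SIZE);
    var readPos = 0, writePos = 0;

    // The setInterval-driven generator calls this to queue samples.
    function ringWrite(samples) {
      for (var i = 0; i < samples.length; i++) {
        ring[writePos] = samples[i];
        writePos = (writePos + 1) % RING_SIZE;
      }
    }

    var context = new webkitAudioContext();
    var node = context.createJavaScriptNode(2048, 1, 1);
    node.onaudioprocess = function (event) {
      var out = event.outputBuffer.getChannelData(0);
      for (var i = 0; i < out.length; i++) {
        if (readPos !== writePos) {
          out[i] = ring[readPos];
          readPos = (readPos + 1) % RING_SIZE;
        } else {
          out[i] = 0;                         // underrun: audible gap
        }
      }
    };
    node.connect(context.destination);

Under the Audio Data API all of this collapses to a single mozWriteAudio
call, with the browser owning the buffer.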

- "The callback method knows how often to fire," this is a fallacy, even
flash falls for this issue and can produce clicks and pops on real-time
generated audio (Even their docs hint at this). This is because by the time
the callback API figures out a delay, its buffering may be premature due to
previous calculations and may as a result gap the audio. It is imperative
you let the developer control the buffering process, since only the
developer would truly know how much buffering is needed. Web Audio in chrome
gaps out for instance when we're drawing to a canvas stretched to fullscreen
and a canvas op takes a few milliseconds to perform, to a reasonable person
this would seem inappropriate. This ties in basically with the previous
point of letting the browser manage the buffer passed to it, and allowing
the JS developer to buffer ahead of time rather than having a real-time
thread try to play catch-up with an inherently bad plan.
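
For contrast, here is the ahead-of-time pattern the Audio Data API already
permits, with the developer choosing the safety margin. generate() is a
placeholder for whatever synth actually produces the samples:

    var audio = new Audio();
    audio.mozSetup(1, 44100);          // mono at 44100 Hz
    var written = 0;
    var LEAD = 22050;                  // keep ~500 ms queued; our choice

    setInterval(function () {
      // Samples handed to the browser but not yet played.
      var buffered = written - audio.mozCurrentSampleOffset();
      if (buffered < LEAD) {
        written += audio.mozWriteAudio(generate(LEAD - buffered));
      }
    }, 100);

With half a second of lead, a few milliseconds lost to a canvas operation
just eats into the margin instead of gapping the output.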

- Building on the last point: to achieve ahead-of-time buffering, I believe
it would be wise to either introduce a function that allows samples to be
pushed at any time without waiting for a callback, just like mozWriteAudio,
OR to allow the callback to be fired when buffering reaches a low point
*specified* by the developer. This low point is not how many samples are
sent to the browser each callback; it tells the API WHEN to fire the
callback, namely once a certain number of samples remain before the buffer
runs empty. A sketch of what that second option might look like follows.
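
None of the names below exist in any draft; they are purely hypothetical and
only show the shape of the idea:

    var source = context.createBufferedSource();  // hypothetical node
    source.lowWaterMark = 8192;    // hypothetical: fire when < 8192 remain
    source.onsampleslow = function () {           // hypothetical event
      // Push as much as we like; the browser owns the buffer.
      source.write(generate(16384));
    };
    source.connect(context.destination);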

I hope some or all of the points listed here can go toward providing a
proper API for real-time generated audio output in JavaScript in a
21st-century browser. :D
