- From: Philip Jägenstedt <philipj@opera.com>
- Date: Tue, 12 Jul 2011 10:25:20 +0200
- To: public-audio@w3.org
On Mon, 11 Jul 2011 08:11:22 +0200, Grant Galitz <grantgalitz@gmail.com> wrote:

> I'll briefly compare the Mozilla Audio Data API and the Web Audio API
> and run through this list of what can be improved upon in Web Audio.
>
> - Web Audio does not allow resampling; this is a major thorn in
> probably a couple people's butts, because I have to do this in
> JavaScript manually. If there is a security concern about
> bottlenecking, then I'd assume we could throw in some
> implementation-side limitations on the number of concurrent resampling
> nodes that could be run at the same time.
>
> - Web Audio forces the JavaScript developer to maintain an audio
> buffer in JavaScript. This applies to audio that cannot be timed to
> the Web Audio callback, such as an app timed by setInterval that has
> to produce x samples every x milliseconds. The Mozilla Audio Data API
> allows the JS developer to push samples to the browser and let the
> browser manage the buffer on its own. The callback grabbing x number
> of samples every call is not a buffer on its own; that's the callback
> sampling the whole buffer I'm talking about. Buffer ring management in
> JavaScript takes up some CPU load, and in my opinion it would always
> be better to let the browser manage such a task.
>
> - "The callback method knows how often to fire": this is a fallacy.
> Even Flash falls for this issue and can produce clicks and pops on
> real-time generated audio (even their docs hint at this). This is
> because by the time the callback API figures out a delay, its
> buffering may be premature due to previous calculations and may as a
> result gap the audio. It is imperative that you let the developer
> control the buffering process, since only the developer would truly
> know how much buffering is needed. Web Audio in Chrome gaps out, for
> instance, when we're drawing to a canvas stretched to fullscreen and a
> canvas op takes a few milliseconds to perform; to a reasonable person
> this would seem inappropriate. This ties in with the previous point of
> letting the browser manage the buffer passed to it, and allowing the
> JS developer to buffer ahead of time rather than having a real-time
> thread try to play catch-up with an inherently bad plan.
>
> - Building on the last point, in order to achieve ahead-of-time
> buffering, I believe it would be wise either to introduce a stub
> function that allows samples to be added at any time without waiting
> for a callback, just like mozWriteAudio, OR to allow the callback
> method to be called when buffering reaches a low point *specified* by
> the developer. This low point is not how many samples are to be sent
> to the browser each callback; it lets the API know WHEN to fire the
> callback, with the firing happening a certain number of samples before
> the buffer empties.
>
> I hope we can use some or all of the points listed here in providing a
> proper API for real-time generated audio output in JavaScript in a
> 21st century browser. :D

Something like <http://0pointer.de/blog/projects/pulse-glitch-free.html> might be of interest here. The model is basically a ring buffer where you can overwrite any data at any point. This allows you to buffer up a lot of data when latency is not critical, but to rewrite data when latency is an issue.

The callback method is basically a pull model. AFAICT, the only way to achieve low latency is by having the callback fire very late, increasing the risk of missing the deadline. Mozilla's (old) API is a push model. AFAICT, the only way to achieve low latency is by filling up very little data at a time, again increasing the risk of gaps.
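To make the trade-off concrete: the buffering Grant has to do by hand sits between a push-style producer (a setInterval loop, or mozWriteAudio-style writes) and a pull-style consumer (the Web Audio callback). Here is a minimal sketch of such a client-side ring buffer, including the overwrite step that the glitch-free model relies on; the RingBuffer type and its method names are hypothetical, not part of Web Audio or the Audio Data API:

```javascript
// Hypothetical helper: the bookkeeping a JS app must currently maintain
// between a sample producer and a pull-style audio callback.
function RingBuffer(capacity) {
  this.data = new Float32Array(capacity);
  this.readPos = 0;
  this.writePos = 0;
  this.length = 0; // samples currently buffered
}

// Producer side: push samples whenever they are generated.
// Returns the number of samples actually written (like mozWriteAudio).
RingBuffer.prototype.push = function (samples) {
  for (var i = 0; i < samples.length; i++) {
    if (this.length === this.data.length) break; // full: caller retries later
    this.data[this.writePos] = samples[i];
    this.writePos = (this.writePos + 1) % this.data.length;
    this.length++;
  }
  return i;
};

// Consumer side: the pull callback drains what it needs; an underrun
// yields silence, i.e. the audible gap being discussed.
RingBuffer.prototype.pull = function (out) {
  for (var i = 0; i < out.length; i++) {
    if (this.length === 0) {
      out[i] = 0; // buffer underrun
    } else {
      out[i] = this.data[this.readPos];
      this.readPos = (this.readPos + 1) % this.data.length;
      this.length--;
    }
  }
};

// Overwrite support: step the write position back over up to n samples
// that are queued but not yet consumed, so they can be rewritten. This is
// what lets an app buffer far ahead, yet still react late to new events.
RingBuffer.prototype.rewind = function (n) {
  var m = Math.min(n, this.length);
  this.writePos = (this.writePos - m + this.data.length) % this.data.length;
  this.length -= m;
  return m;
};
```

With overwriting, the producer can keep the buffer deeply filled (low risk of gaps) and still rewind over queued-but-unplayed samples when something latency-sensitive happens, which is the flexibility a plain push or pull interface lacks.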
Perhaps an overwriteable ring buffer model is more complicated, but it is very flexible in allowing the application to pick the (necessary) trade-off between latency and risk of gaps.

-- 
Philip Jägenstedt
Core Developer
Opera Software
Received on Tuesday, 12 July 2011 08:25:38 UTC