Re: Questioning the current direction of the Web Audio API

> I did use Jussi Kalliokoski's sink.js

If you used sink, perhaps your buffer sizes were 4096? .. which would explain the large UI->audio latency you report. Stepping back a bit: reducing the buffer size makes the audio break up when you touch the UI even a little, while increasing it increases the latency. For some applications latency is not a concern and you can chug away happily in a JS node. For others (especially games), latency is paramount and even a 512-sample delay is an experience downgrade, let alone 4096.
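To put numbers on that trade-off, here is a back-of-the-envelope sketch (plain JS; the 44.1 kHz sample rate is an assumption for illustration, real contexts report their own):

```javascript
// Latency contributed by one processing buffer, in milliseconds.
function bufferLatencyMs(bufferSize, sampleRate) {
  return (bufferSize / sampleRate) * 1000;
}

const sampleRate = 44100; // assumed; an actual AudioContext exposes its own rate

// A 4096-sample buffer adds ~93 ms of queueing delay per buffer ...
console.log(bufferLatencyMs(4096, sampleRate).toFixed(1)); // "92.9"

// ... while 256 samples adds only ~5.8 ms, at the cost of far more
// frequent callbacks that are much easier to miss under UI load.
console.log(bufferLatencyMs(256, sampleRate).toFixed(1)); // "5.8"
```

And that is per buffer; double-buffering or extra queueing in the implementation multiplies it further.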

> But once again, I can't imagine that there wouldn't be a solution to that

The *entire* programmable audio community has been chugging away at low-latency audio for decades, and pretty much the only folks doing it right are the hardware people ... who limit programmability. Among OSes, only Mac OS X does a decent job today as far as I can tell, unless you're willing to install custom realtime Linux kernels (pardon my Linux ignorance). Trying to do that with a GC-ed language, in a browser, in a single-threaded, sandboxed environment, while also having to support lower-power (as in electricity) devices, is a much harder problem, and one with a social component as well. Note that Android's horrible audio latency is only now getting attention ... and that's for *native* code.

The kinds of problems being thrashed out here of late, such as shared memory, race conditions, etc., are almost trivial compared to what we'd be screaming about if the engine had 20ms latency across the board by design.

So .. I +1 Chris Wilson's view that the current design is a respectable effort. Instead of imposing bad latency across the board, it at least provides a great low-latency solution for simple cases (and some pretty complex ones too) while not making the problem of arbitrarily programmable audio any worse. From an adoption perspective, it is much easier to get a bunch of efficient native nodes accepted than to force all browsers (including those on mobile devices) to implement a dynamic fast-optimizing compiler for a special subset of JS that must run in a realtime-critical process with GC turned off, or prove that the code won't generate garbage .. just to be able to trigger sounds and play with filtering.
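For what it's worth, the "trigger sounds and play with filtering" case really is simple with the native nodes. A minimal sketch (the graph-building part is guarded so it only runs where AudioContext exists; `dbToGain` and `triggerSample` are my own illustrative names, not part of the API):

```javascript
// Convert decibels to a linear gain factor (plain helper, not Web Audio API).
function dbToGain(db) {
  return Math.pow(10, db / 20);
}

if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();

  // Play a decoded sample through a native filter and gain stage.
  // No JS runs in the audio callback, so the engine keeps its own
  // low-latency path; call this from a UI event handler.
  function triggerSample(decodedBuffer) {
    const source = ctx.createBufferSource();
    source.buffer = decodedBuffer;

    const filter = ctx.createBiquadFilter();
    filter.type = "lowpass";
    filter.frequency.value = 2000; // Hz

    const gain = ctx.createGain();
    gain.gain.value = dbToGain(-6); // roughly 0.5 linear

    source.connect(filter);
    filter.connect(gain);
    gain.connect(ctx.destination);
    source.start(); // scheduled on the audio thread, not JS timing
  }
}
```

The point being: everything after the UI event happens in native code on the audio thread, which is exactly the part a JS-everywhere design would have to reinvent.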


On 21 Oct, 2013, at 3:52 PM, s p <> wrote:

> > In all likelihood, we might be here today without even low latency sample triggering for games if that route had been taken.
> Impossible to say, since that route hasn't been taken :)
> > Did it use canvas graphics too? [...] What buffer durations do you use?
> I did use canvas graphics, but with smaller patches, far fewer objects. Buffers were 256. For cross-browser audio I used Jussi Kalliokoski's sink.js, though I don't remember whether it does any kind of optimizations.
> It's true that I had quite a big latency between UI events and sound, which I didn't have time to try to solve.
> Honestly I can't comment too much on this problem, as I am just a simple user, not so familiar with browser architecture. But once again, I can't imagine that there wouldn't be a solution to that, if all the brain power put into designing a node-based framework had been used on this issue (
> > The current solution of native nodes is only partly for speed. The other part is so that they can do their job on another core if available.
> Wouldn't it be possible to run a subset of JavaScript on another core?
> Those are all problems that would have been encountered if that other route had been taken. But haven't there also been a lot of problems to solve with the current choices? Lots of thinking put into API design for all those nodes and (sorry once again) re-inventing the wheel?

Received on Monday, 21 October 2013 18:11:57 UTC