Re: Questioning the current direction of the Web Audio API

> In all likelihood, we might be here today without even low latency sample
> triggering for games if that route had been taken.

Impossible to say, since that route hasn't been taken :)

> Did it use canvas graphics too? [...] What buffer durations do you use?

I did use canvas graphics, but with smaller patches and far fewer objects.
Buffers were 256 frames. For cross-browser audio I used Jussi Kalliokoski's
sink.js (https://github.com/jussi-kalliokoski/sink.js/), though I don't
remember whether it does any optimizations of its own.
It's true that I had quite a big latency between UI events and sound, which I
didn't have time to try to solve.
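
To give an idea, the kind of setup I mean looks roughly like this (a plain
ScriptProcessorNode sketch with a 256-frame buffer; sink.js wraps something
equivalent on top of whatever backend is available, but I haven't checked its
internals, so take this as an illustration rather than my exact code):

    // Callback-driven output with a small fixed-size buffer.
    var context = new AudioContext();
    var bufferSize = 256; // the size I used in my patches
    var node = context.createScriptProcessor(bufferSize, 0, 2);
    var phase = 0;

    node.onaudioprocess = function (event) {
      var left = event.outputBuffer.getChannelData(0);
      var right = event.outputBuffer.getChannelData(1);
      for (var i = 0; i < bufferSize; i++) {
        // Placeholder DSP: a quiet sine where the real patch would run.
        left[i] = right[i] = 0.1 * Math.sin(phase);
        phase += 2 * Math.PI * 440 / context.sampleRate;
      }
    };

    node.connect(context.destination);
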
Honestly I can't comment too much on this problem, as I am just a simple
user, not so familiar with browser architecture. But once again, I can't
imagine that there wouldn't be a solution to it, if all the brain power put
into designing a node-based framework had been used on this issue
(http://southparkstudios.mtvnimages.com/images/shows/southpark/vertical_video/season_14/sp_1411_clip05.jpg).

> The current solution of native nodes is only partly for speed. The other
> part is so that they can do their job on another core if available.

Wouldn't it be possible to run a subset of JavaScript on another core?
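
Something along these lines is what I have in mind, very roughly; the file
name and the message format are made up, and I honestly don't know whether
it would meet the latency requirements, but it shows the shape of the idea:

    // dsp-worker.js (hypothetical file name): the DSP loop lives in a worker.
    onmessage = function (event) {
      var frames = event.data.frames;
      var buffer = new Float32Array(frames);
      for (var i = 0; i < frames; i++) {
        buffer[i] = 0.1 * (Math.random() * 2 - 1); // placeholder: quiet noise
      }
      // Transfer the underlying ArrayBuffer instead of copying it.
      postMessage(buffer, [buffer.buffer]);
    };

    // Main thread: ask the worker for blocks and hand them to the output.
    var worker = new Worker('dsp-worker.js');
    worker.onmessage = function (event) {
      var rendered = event.data; // Float32Array of rendered samples
      // ...feed it to whatever callback is asking for output samples
      worker.postMessage({ frames: 256 }); // request the next block
    };
    worker.postMessage({ frames: 256 });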

Those are all problems that would have been encountered *if* that other
route had been taken. But haven't there also been a lot of problems to solve
with the current choices? Lots of thinking put into API design for all
those nodes, and (sorry once again) re-inventing the wheel?

Received on Monday, 21 October 2013 10:23:15 UTC