Re: Questioning the current direction of the Web Audio API

Concerning Faust, we already have a hacked WebKit Safari version running on OS X 10.8 with a custom WebAudio API C++ FaustNode that embeds the Faust compiler and the LLVM JIT compiler technology.

This way arbitrary Faust DSP code can be "loaded" into the FaustNode, run at native speed, and be connected and used like any other native node....

Is anybody interested in testing?

Stéphane 


On 21 Oct 2013, at 20:35, Patrick Borgeat <patrick.borgeat@gmail.com> wrote:

> Just my two cents,
> 
> I did some experiments with the WebAudio API and really liked the native nodes, as they allow for rock-solid low-latency playback even when my browser's JS thread is stuttering, even on iOS, etc. I always tried to stay away from ScriptProcessorNodes, and while, of course, there are limitations, you can do quite a lot with native nodes alone.
> 
> ScriptProcessorNode is still great in itself, as it allows for more sophisticated processing and it's a great tool for teaching DSP. But I wouldn't trust a ScriptProcessorNode in a game at the moment.
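The ScriptProcessorNode approach discussed above boils down to supplying a JavaScript callback that fills output buffers block by block. A minimal sketch (the gain value and the 256-sample buffer size, mentioned later in this thread, are illustrative; the pure processing function is kept separate from the browser-only wiring):

```javascript
// Per-sample work done in JS: a simple gain stage.
function applyGain(input, output, gain) {
  for (var i = 0; i < input.length; i++) {
    output[i] = input[i] * gain;
  }
}

// Browser-only wiring; skipped where no Web Audio implementation exists.
if (typeof AudioContext !== 'undefined') {
  var ctx = new AudioContext();
  // 256-sample buffers, 1 input channel, 1 output channel.
  var node = ctx.createScriptProcessor(256, 1, 1);
  node.onaudioprocess = function (e) {
    applyGain(e.inputBuffer.getChannelData(0),
              e.outputBuffer.getChannelData(0),
              0.5);
  };
  node.connect(ctx.destination);
}
```

The callback runs on the main JS thread, which is exactly why it competes with UI work and garbage collection, as the rest of the thread discusses.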
> 
> If I could dream up my perfect v2 of the WebAudio API, it would have a node into which you can inject native code through a JIT compiler, and which can only talk to the main JS thread through audio parameters, AudioBuffers and maybe other well-defined means. This would feel a lot like WebGL, where you use GLSL for shader programming and uniforms/buffer objects for communication with the shader program.
> 
> Faust would be a great language for designing these custom nodes (and it already does JIT compilation).
> http://faust.grame.fr
> 
> But well, I believe this is way out of scope for the API (and for a web platform in general), but it actually would make it possible to have custom nodes + low latency + native performance :).
> 
> 
> cheers,
> Patrick
> 
> 
> 
> 
> 
> 2013/10/21 Srikumar Karaikudi Subramanian <srikumarks@gmail.com>
>> I did use Jussi Kalliokoski's sink.js (https://github.com/jussi-kalliokoski/sink.js/)
> 
> If you used sink, perhaps your buffer sizes were 4096? .. which would explain the large UI->audio latency you report. Stepping back a bit: reducing buffer sizes will result in audio breaking up when you touch the UI even a little, while increasing them will increase the latency. For some applications latency is not a concern and you can chug away in a JS node happily. For others (especially games), latency is paramount, and even a 512-sample delay is an experience downgrade, let alone 4096.
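The buffer sizes quoted here translate directly into added output latency: the JS node must fill a whole buffer before it can be played, so the added delay is bufferSize / sampleRate. A quick check (a 44.1 kHz sample rate is assumed; real devices may differ):

```javascript
// Added latency, in milliseconds, from one buffer of JS-node buffering.
function bufferLatencyMs(bufferSize, sampleRate) {
  return bufferSize / sampleRate * 1000;
}

console.log(bufferLatencyMs(256, 44100).toFixed(1));   // ~5.8 ms
console.log(bufferLatencyMs(512, 44100).toFixed(1));   // ~11.6 ms
console.log(bufferLatencyMs(4096, 44100).toFixed(1));  // ~92.9 ms
```

A 4096-sample buffer thus adds roughly 93 ms on its own, before any other pipeline delay, which matches the "large UI->audio latency" described above.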
> 
>> But once again, I can't imagine that there wouldn't be a solution to that
> 
> The *entire* programmable-audio community has been chugging away at low-latency audio for decades, and pretty much the only folks doing it right are the hardware people ... who limit programmability. Among OSes, only Mac OS X does a decent job today as far as I can tell, unless you're willing to install custom real-time Linux kernels (pardon my Linux ignorance). Trying to do that with a GC-ed language, in a browser, in a single-threaded, sandboxed environment, with lower-power (as in electricity) devices needing to be supported, is a much harder problem, and one with a social component as well. Note that Android's horrible audio latency is only now getting attention ... and that's for *native* code.
> 
> The kinds of problems being thrashed out here of late, such as shared memory, race conditions, etc., are almost trivial compared to what we'd be screaming about if the engine had 20 ms latency across the board by design.
> 
> So .. I +1 Chris Wilson's view that the current design is a respectable effort. Instead of providing bad latency across the board, it at least provides a great low-latency solution for simple cases (and some pretty complex ones too) while not making the problem of arbitrarily programmable audio any worse. From an adoption perspective, it is much easier to get a bunch of efficient native nodes accepted than to force all browsers (including those on mobile devices) to implement a dynamic, fast-optimizing compiler for a special subset of JS, which needs to run in a realtime-critical process with GC turned off (or prove that the code won't generate garbage) .. just to be able to trigger sounds and play with filtering.
> 
> -Kumar
> 
> On 21 Oct, 2013, at 3:52 PM, s p <sebpiq@gmail.com> wrote:
> 
>> > In all likelihood, we might be here today without even low latency sample triggering for games if that route had been taken.
>> 
>> Impossible to say, since that route hasn't been taken :)
>> 
>> > Did it use canvas graphics too? [...] What buffer durations do you use?
>> 
>> I did use canvas graphics, but with smaller patches and far fewer objects. Buffers were 256 samples. For cross-browser audio I used Jussi Kalliokoski's sink.js (https://github.com/jussi-kalliokoski/sink.js/), though I don't remember whether it does any kind of optimization.
>> It's true that I had quite a big latency between UI events and sound, which I didn't have time to try to solve.
>> Honestly, I can't comment too much on this problem, as I am just a simple user, not so familiar with browser architecture. But once again, I can't imagine that there wouldn't be a solution to that, if all the brain power put into designing a node-based framework had been used on this issue (http://southparkstudios.mtvnimages.com/images/shows/southpark/vertical_video/season_14/sp_1411_clip05.jpg).
>> 
>> > The current solution of native nodes is only partly for speed. The other part is so that they can do their job on another core if available.
>> 
>> Wouldn't it be possible to run a subset of JavaScript on another core?
>> 
>> Those are all problems that would have been encountered if that other route had been taken. But haven't there also been a lot of problems to solve with the current choices? Lots of thinking put into API design for all those nodes and (sorry once again) re-inventing the wheel?
> 
> 

Received on Monday, 21 October 2013 21:10:43 UTC