Re: Welcome!

Hello everyone!

> Since this group was created quite recently, I do not want to rush to the formal parts just yet (scope of work, decision processes, communication preferences, selecting a chair, etc). Let's wait a bit until more people have had the chance to join etc.

Thank you, Marcus, for creating the group; I totally agree that the bureaucracy can wait a bit too.

But to get discussion started I'm going to comment a bit on my thoughts about the DSP API[1], what I think are interesting use cases for 'array math', and how I think it could (and should) interact with other emerging technologies.

I know it changed name in the latest iterations, but I 'grew up' with it being called the DSP API, so I am going to call it that for the remainder of this mail. For those of you who didn't, I'm talking about the spec listed as [3] in the original mail.

> Real-time audio processing entails a few requirements that make it slightly more difficult than some other forms of data and signal processing. Especially important is low latency (typically microseconds rather than milliseconds), low CPU overhead, and for garbage collected languages such as JavaScript the GC activity must be minimal.

I think this is a use case we must support; it is important for the long-term evolution of the Web Audio API and WebGL, and in the future WebCL. A lot of the stuff we currently try to push out of JS as much as possible could, with a low-latency/high-performance way to do vector math, be brought back to JS where it belongs.
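To make the workload concrete, here is a sketch of the scalar inner loop we hand-write today (the function name `mixInto` and the imagined `ArrayMath.madd` call are mine, not from any spec); an ArrayMath-style API could replace this whole loop with one call that the engine is free to vectorize:

```javascript
// Hand-written scalar mix of two audio buffers into a destination.
// An ArrayMath-style API could expose this entire loop as a single
// call (something like ArrayMath.madd(dst, a, b, gain)) that the
// engine can implement with SIMD under the hood.
function mixInto(dst, a, b, gain) {
  for (var i = 0; i < dst.length; i++) {
    dst[i] = a[i] + gain * b[i];
  }
  return dst;
}

var a = new Float32Array([1, 2, 3, 4]);
var b = new Float32Array([4, 3, 2, 1]);
var out = mixInto(new Float32Array(4), a, b, 0.5);
// out is [3, 3.5, 4, 4.5]
```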

It is also important if we want to be able to allow the 'Web Array Math' API inside of a ParallelArray/RiverTrail[2] context, so we can utilize that synergy to allow for new things we didn't even consider when discussing this. This could be especially important for WebGL/WebCL, where you might want to pre-process something in parallel before uploading it to a coprocessor.

The last class of technologies we'll have to interact with is the 'emscripten' class of software, and we should build the API so that it is accessible to that sort of software. I'm sure even a fast memcpy could help accelerate that kind of code, and if we can interact with asm.js in some meaningful way, that's an advantage. (But for the record, I dislike the idea behind asm.js as much as I dislike NaCl.)
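For reference, typed arrays already give emscripten-style code a bulk-copy primitive; whether an engine actually lowers it to a real memcpy is implementation-dependent, but this is the pattern such code relies on:

```javascript
// A heap-to-heap block copy over a shared ArrayBuffer, the memory
// model emscripten-compiled code uses. TypedArray.prototype.set is
// the closest thing JS has to memcpy today; a Web Array Math API
// would want every operation to be at least this cheap.
var heap = new ArrayBuffer(32);
var bytes = new Uint8Array(heap);
for (var i = 0; i < 8; i++) bytes[i] = i + 1;  // fill a "source" region
bytes.set(bytes.subarray(0, 8), 16);           // copy 8 bytes to offset 16
// bytes[16..23] is now 1..8
```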

> Considering the design of the Web Audio API, where the data is made available in a typed array on the JavaScript heap and come in bursts of a few hundred samples at a time, the most viable option is to do all the processing on the CPU. On the other hand, using WebGL/GLSL or other GPU-based APIs, such as WebCL, would quite likely fail to meet the latency requirements.

They would fail in general. WebCL can run on the CPU or on a dedicated audio DSP, but those are not use cases that Web Audio supports yet. You would probably need some kind of WebCL node, and I doubt DSPs will support that within a reasonable time frame.

> ...which leaves us with the CPU load. While modern JavaScript engines are quite impressive, they usually fail to utilize the instruction level parallelism provided by SIMD instructions, which becomes the most important missing link for achieving performance levels in JavaScript on par with hand crafted native code (e.g. C++ with SIMD intrinsics).

And here we come to the magic SIMD word, which I think is key to the debate here. Every processor that we run JS on today supports SIMD, yet most don't support SIMD in a way that JS can take advantage of. (NEON on ARMv7 doesn't support double precision, for example.)

I think the second most important thing (and something that the specs hack around all the time) is the lack of numerical data types other than double, which is also something the DSP API battled with. This is something we'll have to work with; we probably cannot fix it once and for all, but we should keep it in mind.
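As a small illustration of the data-type problem: every scalar in JS is a double, so single-precision semantics have to be emulated by round-tripping through a Float32Array (or `Math.fround`, where engines support it):

```javascript
// All JS arithmetic happens in double precision; to get float32
// semantics you must explicitly round after every operation.
var f32 = new Float32Array(1);
function fround(x) { f32[0] = x; return f32[0]; } // poor man's Math.fround

var d = 0.1;          // double-precision 0.1
var f = fround(0.1);  // nearest float32 to 0.1, widened back to double
console.log(d === f); // false: the two roundings differ
```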

I think the DSP API was amazing for what it was designed for, audio DSP in the context of Web Audio, but once we expand the scope, parts of the design don't make as much sense anymore. I'm mainly thinking about the Numerical Accuracy section (4), the methods related to complex numbers, and the methods starting with `sample`, plus `sum`, `pack`, and `unpack`. (Please tell me if you disagree.)

Once we no longer know what the user is trying to do, numerical accuracy becomes paramount. Games, for example, could depend on IEEE 754 behavior for synchronized physics. IEEE 754 is designed to minimize errors for people who are ignorant about numerics, and that is behavior I think we should try to keep. We don't want the API to behave differently across browsers when we don't know what it will be used for.

> Based on those conditions, I tried to come up with a fairly minimal API that would be easy to implement in a Web client, yet bring cross platform SIMD capabilities to the Web platform. The result after a few iterations was an API that I called the "DSP API", which later matured into what is now called the "Web Array Math API".

I think we have two real options.

The low-risk version is to write a version of the `ArrayMath` part of the DSP API that supports all the useful data-types that we want to support. This could probably be done in a short amount of time and be reasonably non-controversial.

The high-risk version is to write a short vector API (think raw SIMD) that you can use to easily build the API above. This is probably controversial and a bit more complex, since the interaction with the JS engines is at a much lower level. On the other hand, it would be the holy grail of JS performance.

A short vector API could be like ecmascript_simd[3] or the spec[4], with an API that is designed to map down to hardware _on multiple platforms_. ARM is winning one battle, x86 another, and MIPS likely a third.
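A scalar sketch of what a four-wide short vector type could look like (the `float32x4` factory below is my own naming, loosely modeled on ecmascript_simd, not any agreed API); the point is that each value maps to one SSE/NEON register and each method to roughly one instruction:

```javascript
// A float32x4 value: four single-precision lanes, which fit in one
// SSE register on x86 or one NEON register on ARM. Each method below
// would ideally compile to a single SIMD instruction.
function float32x4(x, y, z, w) {
  var lanes = new Float32Array([x, y, z, w]);
  return {
    lanes: lanes,
    add: function (other) {   // ADDPS on x86, VADD.F32 on ARM
      return float32x4(lanes[0] + other.lanes[0],
                       lanes[1] + other.lanes[1],
                       lanes[2] + other.lanes[2],
                       lanes[3] + other.lanes[3]);
    },
    mul: function (other) {   // MULPS on x86, VMUL.F32 on ARM
      return float32x4(lanes[0] * other.lanes[0],
                       lanes[1] * other.lanes[1],
                       lanes[2] * other.lanes[2],
                       lanes[3] * other.lanes[3]);
    }
  };
}

var v = float32x4(1, 2, 3, 4).add(float32x4(4, 3, 2, 1));
// v.lanes is [5, 5, 5, 5]
```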

There's technically a third route that I don't want to throw out the window without telling you all about it. I'll just call it the 'NumPy/Matlab' route; it would essentially mean implementing a similar API in JS. It is by far the hardest route and solves a different problem, but if we want a high-level API, you really can optimize the shit out of software like that. If we wanted to do actual high-performance numerics, this would be the obvious way to go, but I don't think we actually want that.
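To show why that route is hard but fast, here is a toy sketch (an entirely hypothetical API of my own) of the expression-fusion trick such libraries live on: record the operations, then evaluate the whole expression in one pass instead of allocating one temporary array per operator:

```javascript
// A deferred-evaluation array expression: a.add(b).mul(c) builds an
// expression tree, and eval() walks it once per element, so no
// intermediate arrays are allocated. Loop fusion like this is the
// core optimization a NumPy/Matlab-class API would be built around.
function expr(get, length) {
  return {
    length: length,
    get: get,
    add: function (other) {
      return expr(function (i) { return get(i) + other.get(i); }, length);
    },
    mul: function (other) {
      return expr(function (i) { return get(i) * other.get(i); }, length);
    },
    eval: function () {
      var out = new Float32Array(length);
      for (var i = 0; i < length; i++) out[i] = get(i);
      return out;
    }
  };
}
function wrap(array) {
  return expr(function (i) { return array[i]; }, array.length);
}

var a = wrap(new Float32Array([1, 2, 3]));
var b = wrap(new Float32Array([3, 2, 1]));
var c = wrap(new Float32Array([2, 2, 2]));
var out = a.add(b).mul(c).eval();  // one loop, no temporaries
// out is [8, 8, 8]
```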

> While the API was designed with audio signal processing in mind, it should of course be useful for other things too. For instance, you can check some early usage examples (non-accelerated when you use the JS polyfill).

Demos win the world; I think it is important that we demo early, demo often.

> Now, after some proof-of-concept testing the time has come to involve more people and take the work forward. ...which is why I created this groups.
> I suspect that the first things we'll try to tackle in this group (apart from practical & formal issues) are the high level aspects of the proposed API (such as its scope, its general design, use cases and its merits and flaws compared to other similar technologies), and provided that we reach some sort of consensus we'll move on to lower level aspects of the API (such as interface design, missing/superfluous methods, precision requirements, testability, etc).

I think my stream-of-consciousness covered a few of those points.

Jens Nockert


Received on Saturday, 23 November 2013 17:52:58 UTC