Re: Questioning the current direction of the Web Audio API

+alex, as TAG representative; feel free to add others from the TAG if
appropriate.  The head of this thread is at

I've been holding off responding to this thread, but it seems to be growing
a bit out of control.  I'd ask that the invective against the current
design be toned down ("reinventing the wheel", for example: the current
design neither reinvents the wheel nor the concepts behind the current
nodes).

We have a significant challenge getting low-latency audio in JavaScript as
it is today.  JavaScript in the main thread is, I will postulate, simply
not a good place for doing audio.  You can lump all of its shortcomings
under "quite big latency" and presume they'll just be "fixed" at some
point, but as any experienced developer will know, it's quite hard to add
performance later - and in this case I believe you're missing the point.
The Web Audio API, in its current design, was designed to make easy things
wickedly efficient, moderately hard things fairly simple, and very advanced
things possible.

In that vein, we do have a significant issue that we need to fix -
ScriptProcessorNodes in the main thread are simply not a good escape valve.
To fix their shortcomings, they need to be placed in a thread that can
have restricted interruptions (i.e. not be delayed by main-thread
processing, and preferably have very limited garbage collection).  In the
main thread, developers can typically presume they're doing a good job if
they keep their visual processing under 16ms or so - hitting the 16.7ms
frame budget needed for 60fps.  For audio, I believe a
16ms latency is simply unacceptable; and certainly, at the very least, we
would need tools for developers to profile and debug this, similar to the
frame timeline tools we have today for debugging visual refresh.
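To put rough numbers on that (a back-of-the-envelope sketch; the buffer
sizes and 44.1 kHz rate are just illustrative), the latency one
ScriptProcessorNode buffer adds is simply bufferSize / sampleRate:

```javascript
// Latency (in ms) contributed by a single processing buffer.
// ScriptProcessorNode buffer sizes are powers of two from 256 to 16384.
function bufferLatencyMs(bufferSize, sampleRate) {
  return (bufferSize / sampleRate) * 1000;
}

// At 44.1 kHz:
//   256 samples  -> ~5.8 ms  (the spec's minimum - already a third of a frame)
//   1024 samples -> ~23.2 ms (a common choice - already past a 16 ms budget)
```

And that is before any main-thread jank or double-buffering is counted.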

I've said - actually, since I joined the project - that we needed to have
script processing nodes that were not in the main thread.  I've also said
that I am a very strong supporter of the current design, and having a broad
toolbox available in an easy-to-use, efficient package, and have repeatedly
said that the built-in nodes have an immense amount of power.  I am very
strongly AGAINST any design that requires me, for example, to implement my
own FFT inputs in order to do basic filtering, or process every sample
myself to do gain control, or that requires me to handle playing back an
audio buffer myself one processing frame at a time.  That doesn't mean I
don't think you should be able to go beyond those capabilities efficiently
- but I think anyone who is saying "you must be able to efficiently mix all
types of nodes, including script nodes, without any latency penalties"
hasn't thought through how to implement such a framework - or more to the
point, whether such a thing is possible.  Certainly, if you want a custom
framework, you can customize each part of it - and you can, in fact, simply
have a ScriptProcessorNode connected to the audioContext.destination, and
do everything inside that node.
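A minimal sketch of that last point (the wiring only runs in a browser; the
1024-sample buffer size, the soft-clip curve, and the function name are my
own illustration, not anything mandated by the spec):

```javascript
// All DSP in script: a soft-clipping waveshaper, written as a pure
// function so the math itself can run (and be tested) anywhere.
function softClipBuffer(input, output) {
  for (let i = 0; i < input.length; i++) {
    output[i] = Math.tanh(input[i]);
  }
}

// Browser-only wiring: one ScriptProcessorNode connected directly to the
// destination, doing everything inside its callback.
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const node = ctx.createScriptProcessor(1024, 1, 1);
  node.onaudioprocess = (e) => {
    softClipBuffer(e.inputBuffer.getChannelData(0),
                   e.outputBuffer.getChannelData(0));
  };
  node.connect(ctx.destination);
}
```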

Clearly, I'm a strong believer in the power of the current API - I refer
you to the vocoder I wrote in Web Audio a year and a half ago, as well as
other demos on that site - but at the same time, I recognized the processing
capabilities that are NOT in the current design (pulse width modulation;
osc sync; expansion/noise gates; sidechain compression; ...).  At the same
time, suggesting that all you can get with the current nodes is "a couple
of effects" is seriously missing the power of chaining these nodes
together, and audio-rate AudioParams.
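For instance (a hedged sketch with my own parameter values), a tremolo is
just an oscillator driving a GainNode's AudioParam at audio rate - no
script involved - and the effective gain at time t is
base + depth * sin(2*pi*f*t):

```javascript
// Effective gain when an LFO is summed into gain.gain at audio rate.
function modulatedGain(base, depth, lfoHz, t) {
  return base + depth * Math.sin(2 * Math.PI * lfoHz * t);
}

// Browser-only wiring: LFO -> depth -> gain.gain (an audio-rate AudioParam).
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();   // carrier
  const gain = ctx.createGain();
  const lfo = ctx.createOscillator();   // modulator
  const depth = ctx.createGain();
  gain.gain.value = 0.5;
  lfo.frequency.value = 4;              // 4 Hz tremolo
  depth.gain.value = 0.4;
  lfo.connect(depth);
  depth.connect(gain.gain);             // connecting a node to an AudioParam
  osc.connect(gain);
  gain.connect(ctx.destination);
  osc.start();
  lfo.start();
}
```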

My dislike for the Media Streams Processing proposal was not its
incorporation of JS; it was that it RELIED on JS to do even basic
processing like gain, and in order to do pretty much anything in it, I
would have to be writing a lot of script myself, or including reverb.js,
filter.js, oscillator.js, etc. in most of my projects.  I found that very
inefficient (and, as I recall, its reliance on main-thread JS - but it's
been a while).  At any rate, I believe it should be efficient to include JS
processing; it is not today, but then again, we're not "done".
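To make the gain example concrete (GainNode is the standard node; the
per-sample function below is my own hypothetical stand-in for what a
script-only design forces everyone to ship):

```javascript
// Native route (browser-only, runs off the main thread):
//   const g = ctx.createGain();
//   g.gain.value = 0.5;
//   source.connect(g);
//   g.connect(ctx.destination);

// Script-only route: a hand-rolled per-sample loop - the kind of code
// you would have to ship in a "gain.js" with every project.
function scriptGain(samples, gain) {
  const out = new Float32Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    out[i] = samples[i] * gain;
  }
  return out;
}
```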


On Fri, Oct 18, 2013 at 9:38 AM, Armagan Amcalar <> wrote:

> Hello,
> I would also like to introduce myself and join the conversation. I'm the
> creator of pedalboard.js <>, the
> open source guitar effects framework. I have to say that I'm amazed at what
> Web Audio API can do. pedalboard.js is entirely based on current AudioNodes
> and I can say that it's fairly sufficient.
> I'm also using it alongside WebRTC at for online jamming.
> Actually I'm working on a product just like the one defined in *2.3
> Online music production tool*. I can say that for most of the
> functionality, AudioNodes are efficient. Yeah, for "generative" music
> production the API is kind of raw. But then there are effects and routing
> and it's another important part of the API, which I can say is handled very
> well. Therefore there are huge opportunities and application possibilities
> with the current functionality and I can just say it will get better.
> Cheers,
> Armagan
> On Oct 18, 2013, at 6:10 PM, Hongchan Choi <> wrote:
> Hello All,
> I am Hongchan, the author of 'cruelly' lacking and 'ugly' WAAX. Since my
> work is brought up in the topic, I guess I have to defend myself somehow.
> I went down the same path as the OP at some point; that was simply because
> I have been a computer musician myself for over a decade. It is all about
> experiments and I am very well aware of that.
> The first two revisions of the library were completely based on
> ScriptProcessorNode - I had to dump them all because they were not usable
> in real-world production. That was the moment I changed the goal and the
> design: *something that runs without glitches.*
> Now that I have been working with this API for a while (even with Chris
> Rogers himself), I just can't say everything has failed. I would say this
> API is built for production. That said, it is just not as useful as PD,
> SC, or ChucK for experimental purposes.
> Currently I am refining the latest revision (r13) of WAAX <> and we
> (Chris Rogers and I) have put some ideas into it in order to
> implement essential building blocks solely based on the native nodes by
> utilizing the Web Audio API in different ways. This is not public yet, and
> hopefully I can wrap up the long-overdue documentation.
> *I am certain that there are many things we can achieve on top of the
> current design of the Web Audio API*. I find that the majority of web audio
> projects overlook the countless possibilities. Nonetheless, I cannot say
> the OP is wrong. I had the same complaints and rants once, but I just
> decided to look at the other side.
> While I am at it, I would like to thank all the people in the audio
> working group. This is fantastic work!
> Best,
> Hongchan
> On Fri, Oct 18, 2013 at 7:01 AM, s p <> wrote:
>> Answer from Chris Lowis:
>>  Hi Sebastien, Thank you very much for getting in touch, it's great to
>> hear from computer musicians and to learn more about your requirements.
>> I'll reply in-line here, but perhaps we could continue the discussion as a
>> group on
>> > ry similar paradigm). It turned out to be pretty much impossible, for a
>> simple reason: the Web Audio API really lacks objects, so I would have
>> to implement most of them using **ScriptProcessorNodes**, and then lose
>> all the benefits of using the Web Audio API (all DSP in one
>> ScriptProcessorNode would be faster).
>> Could you clarify what you mean by "objects"? Do you mean node types, and
>> in particular one-to-one mapping to existing nodes within PD - or are you
>> talking about a JavaScript "object" layer on top of Web Audio?
>> > The only stab - that I know of - at implementing some serious sound
>> programming library on top of other WAA nodes is [waax]().
>> But it cruelly lacks objects, and uses a
>> couple of [ugly hacks]().
>> I could do with a clarification of "objects" again here, just to help
>> understand what you mean.
>> > I love the idea of the Web Audio API. But right now I feel that it
>> really lacks perspective, and a clear direction.
>> I think it's fair to say that the Web Audio API targets, at least in its
>> initial "version 1" form, common use cases on the web where previously one
>> may have used Flash, plugins or hacks around the <audio> element. Having
>> said that, there has been a large amount of interest from the computer
>> music community in the API, and there is certainly a lot of interest in
>> developing more in this direction.
>> > I'd really like to hear people's opinion about why they do it like
>> that, how and why they think it can/will be used for real-life
>> applications, because the goals stated in the draft are - in my humble
>> opinion - completely unrealistic with the current functionalities.
>> Our Use Cases document gives a good idea of the kind of real-life
>> applications we are targeting:
>> > I am sorry to be a bit harsh, and to question this project in its
>> foundations, but I suppose that's what you get for being involved in open
>> standards: any random angry guy out there can come and complain :)
>> Not at all; speaking personally, I think what you are doing is fascinating,
>> and I hope more people will attempt this kind of work with the API in the
>> future.
>> Please keep the discussion going! Cheers, Chris
> --
> Hongchan Choi
> PhD Candidate, Research Assistant
> Center for Computer Research in Music and Acoustics (CCRMA)
> Stanford University

Received on Monday, 21 October 2013 16:20:04 UTC