Questioning the current direction of the Web Audio API

Re-posting from https://github.com/WebAudio/web-audio-api/issues/263 to
start a discussion :)

 So ... where to start?

First, a short introduction, as probably nobody knows me here. My name is
Sebastien Piquemal; I am a web developer and a musician. I've been studying
(and playing/composing) computer music for a few years, which means that I
am a heavy SuperCollider <http://supercollider.sourceforge.net/> and Pure
Data <http://puredata.info/> user (those are the 2 most used languages for
audio programming).
I am also the maintainer of the WebPd
<https://github.com/sebpiq/WebPd> library (Pure Data in JavaScript), and
more recently I started to write a
node.js <https://github.com/sebpiq/node-web-audio-api> implementation of
the Web Audio API.
This message is kind of a rant, as I have been increasingly frustrated
trying to use the Web Audio API for musical experiments. I am sorry about
that; I hope nobody will feel personally attacked.

The previous version of WebPd (written about 2 years ago) used only custom
DSP. So last spring, I thought: "why not use as much of Web Audio as
possible, to get better performance?" I started trying to map Pure Data
objects to Web Audio API objects (both Pd and the WAA are dataflow
systems, so they use a very similar paradigm). It turned out to be pretty
much impossible, for a simple reason: the Web Audio API really lacks
objects. I would have to implement most of them using
*ScriptProcessorNodes*, and then lose all the benefits of using the Web
Audio API (all the DSP in one ScriptProcessorNode would be faster).
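To make this concrete, here is a minimal sketch (the names are mine, not
actual WebPd code) of what implementing a single Pd object with no native
WAA equivalent, e.g. [phasor~], looks like - assuming a browser where
AudioContext and createScriptProcessor are available:

    // A [phasor~]-like ramp generator: there is no native Web Audio
    // node for this, so a ScriptProcessorNode computes it in JS.
    var context = new AudioContext();
    var frequency = 440; // ramp frequency in Hz
    var phase = 0;

    var phasor = context.createScriptProcessor(256, 0, 1);
    phasor.onaudioprocess = function(event) {
      var output = event.outputBuffer.getChannelData(0);
      for (var i = 0; i < output.length; i++) {
        output[i] = phase;
        phase += frequency / context.sampleRate;
        if (phase >= 1) phase -= 1; // wrap the ramp back to 0
      }
    };
    phasor.connect(context.destination);

Multiply a patch by dozens of such objects and you pay a JS callback (and
its buffering latency) per object, whereas a single ScriptProcessorNode
running all the DSP would pay that cost once.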

As a matter of fact, there are a few good libraries out there for audio
programming in JavaScript, all written post-Web Audio API (I won't mention
the good libraries written before the WAA):

https://github.com/colinbdclark/flocking
https://github.com/charlieroberts/Gibber
https://ccrma.stanford.edu/~mborins/420b/#/3

The funny thing is that all of them use the ScriptProcessorNode.
The only stab - that I know of - at implementing a serious sound
programming library on top of the other WAA nodes is
WAAX <https://github.com/hoch/waax>.
But it sorely lacks objects, and uses a couple of ugly hacks
<https://github.com/hoch/WAAX/blob/master/src/units/generators/Noise.js#L14>.
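For instance, the linked hack boils down to something like this (my
paraphrase, not the exact WAAX code): since the WAA has no native noise
generator, you fill a buffer with random samples and loop it forever:

    // White noise faked with a looped AudioBufferSourceNode,
    // because the Web Audio API has no native noise node.
    var context = new AudioContext();
    var length = 2 * context.sampleRate; // 2 seconds of samples
    var buffer = context.createBuffer(1, length, context.sampleRate);
    var data = buffer.getChannelData(0);
    for (var i = 0; i < length; i++) {
      data[i] = Math.random() * 2 - 1; // uniform noise in [-1, 1]
    }

    var noise = context.createBufferSource();
    noise.buffer = buffer;
    noise.loop = true; // repeat the pre-rendered buffer indefinitely
    noise.connect(context.destination);
    noise.start(0);

It works, but every missing primitive ends up needing a workaround of
this kind.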

In my opinion the Web Audio API is very misleading, and a lot of people
seem to think that they can implement anything they could on a desktop
with the existing nodes *without using the ScriptProcessorNode, which is
tagged as evil*.
In fact, the draft identifies use cases which basically cover most audio
applications (audio production and composition, artistic audio
exploration, games ...).
However ... artistic exploration, no way. Audio production ... certainly
not for realtime applications; for example, there is no way you could
implement anything even close to Ableton Live. Games, yeah ... you could
do some old-fashioned "load your sound assets, and apply a couple of
filters to them", but there is no way you could do generative audio using
more advanced synthesis techniques. You probably couldn't even implement
something like FMOD, unless you pre-rendered most of the assets.

I love the idea of the Web Audio API. But right now I feel that it really
lacks perspective, and a clear direction.
I'd really like to hear people's opinions about why it was done this way,
and how and why they think it can/will be used for real-life applications,
because the goals stated in the draft are - in my humble opinion -
completely unrealistic given the current functionality.

I am sorry to be a bit harsh and to question this project in its
foundations, but I suppose that's what you get for being involved in open
standards: any random angry guy out there can come and complain :)

-- 
Sébastien Piquemal

 ----- @sebpiq
 ----- http://github.com/sebpiq
 ----- http://funktion.fm