
Re: Aiding early implementations of the web audio API

From: Marcus Geelnard <mage@opera.com>
Date: Mon, 21 May 2012 13:44:24 +0200
To: public-audio@w3.org, "Alistair MacDonald" <al@signedon.com>
Message-ID: <op.wenpkahjm77heq@mage-desktop>
On 2012-05-16 20:23:04, Alistair MacDonald <al@signedon.com> wrote:

> We want to get a good idea of any major issues that may block/prevent
> vendors from implementing the spec.
> Naturally everyone has different ideals and preferences on various
> details of the spec, which we will continue to work together to unify
> as we go forward.
> This thread however is specifically to take a higher-level look at
> things that could potentially be show-stoppers, or serious
> pressure-points with regards the implementation.
> I want to try and get a clearer picture of what implementation might
> look like across the board, and want to encourage as many vendors to
> chime in as possible.
> I will add this information to the Wiki and we will categorize and
> prioritize this information together on our call next week.

As it stands, the specification is not implementable (too much information  
is missing). Once the issues that we have filed have been resolved, the  
situation should be much clearer, and we should be able to get a better  
view of what implementation will require.

However, on a higher level, I think there are some issues with the spec  
that may make it problematic from a standardization point of view (i.e. it  
may not be the best solution for the Web, at least not in its current  
form).

On the positive side, I think that the API covers the most important use  
cases quite nicely:

• The AudioBufferSourceNode and AudioGainNode nodes should satisfy the  
needs for most casual games and interactive apps (which I suppose is the  
major use case, after all).

• The AudioPannerNode and AudioListenerNode nodes should take care of most  
of the needs for advanced 3D games and the likes (in a very elegant,  
minimalistic fashion, IMO).

• The RealtimeAnalyserNode node should be enough for music visualizers,  
and more (it's a good thing that you can probe any part of the graph, not  
just the AudioDestinationNode, for instance). See  
https://www.w3.org/2011/audio/track/issues/74 for our suggested changes.
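
As an aside, part of the elegance is how little math a panner really needs.  
The sketch below is a plain-JS illustration of the classic equal-power  
panning law; the function name and the simple stereo model are mine, not  
from the spec (which also offers richer models such as HRTF):

```javascript
// Illustrative equal-power stereo panning law, the kind of math an
// AudioPannerNode-style node applies internally for simple stereo pans.
// pan is in [-1, 1]: -1 = hard left, 0 = center, +1 = hard right.
function equalPowerPan(pan) {
  // Map pan onto an angle in [0, PI/2] and split it across the channels.
  const theta = (pan + 1) * Math.PI / 4;
  return { left: Math.cos(theta), right: Math.sin(theta) };
}
```

Because left² + right² = 1 for every pan position, perceived loudness stays  
constant as a source sweeps across the stereo field.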

However, a large part of the API seems mostly useful for "Musical  
Applications". Furthermore, sound generation in musical applications (and  
music authoring tools in particular) revolves largely around two concepts:

• Custom mixing and effect implementations (e.g. Buzz [1], Renoise [2],  
ReBirth [3]).
• Extensive use of plugins (VST, AU, LADSPA, DSSI etc).

This leads me to believe that the JavaScript processing node will be very  
important (for implementing custom effects and instruments, and possibly  
even for creating effect libraries), while some native nodes (such as the  
Oscillator, BiquadFilterNode, DynamicsCompressorNode and DelayNode nodes)  
will not be used as much.
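
To illustrate why the JavaScript processing node carries so much weight for  
this group, here is a sketch of a custom echo effect written as a plain  
per-block processing function, roughly the shape of code an author would run  
inside a JavaScript processing callback (the helper names are illustrative,  
not spec API):

```javascript
// Sketch of a custom feedback-echo effect in plain JS, the kind of
// per-block processing a JavaScript processing node would perform.
// delaySamples and feedback are plain parameters here; in a real node
// they would likely be exposed as AudioParams.
function makeEcho(delaySamples, feedback) {
  const delayLine = new Float32Array(delaySamples); // circular buffer
  let pos = 0;
  // Process one block: read input samples, write output samples.
  return function process(input, output) {
    for (let i = 0; i < input.length; i++) {
      const delayed = delayLine[pos];
      output[i] = input[i] + delayed;                  // dry + echo
      delayLine[pos] = input[i] + delayed * feedback;  // feed back into the line
      pos = (pos + 1) % delaySamples;
    }
  };
}
```

An impulse fed into this effect comes back at full level once, then at  
feedback-scaled levels on each subsequent pass through the delay line.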

Thus I think that the API should focus on good support for JavaScript  
based processing. Ideally, you should be able to re-implement all the  
native nodes as JavaScriptAudioNodes, and the native nodes would be  
"performance helpers" for the most common/heavy operations. This would  
also be great when creating test suites. (To make it universally useful,  
it should be possible to have an arbitrary number of AudioParam objects on  
a JavaScriptAudioNode. Otherwise it would not be possible to implement a  
custom filter node with the same interface as the BiquadFilterNode, for  
example.)
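
To make the "re-implement native nodes in JS" point concrete, here is a  
sketch of a lowpass biquad in plain JavaScript. The coefficient formulas  
follow the widely used RBJ "Audio EQ Cookbook" lowpass; that choice is my  
assumption, since the spec does not currently pin BiquadFilterNode to exact  
formulas:

```javascript
// Lowpass biquad coefficients per the RBJ Audio EQ Cookbook (assumed
// formulas -- one plausible match for BiquadFilterNode, not normative).
function lowpassCoefficients(sampleRate, frequency, Q) {
  const w0 = 2 * Math.PI * frequency / sampleRate;
  const alpha = Math.sin(w0) / (2 * Q);
  const cosw0 = Math.cos(w0);
  const a0 = 1 + alpha; // normalize all coefficients by a0
  return {
    b0: ((1 - cosw0) / 2) / a0,
    b1: (1 - cosw0) / a0,
    b2: ((1 - cosw0) / 2) / a0,
    a1: (-2 * cosw0) / a0,
    a2: (1 - alpha) / a0,
  };
}

// Direct form I filtering of one block of samples, as a JavaScript
// processing node could run it.
function processBiquad(c, input) {
  const output = new Float32Array(input.length);
  let x1 = 0, x2 = 0, y1 = 0, y2 = 0;
  for (let n = 0; n < input.length; n++) {
    const x = input[n];
    const y = c.b0 * x + c.b1 * x1 + c.b2 * x2 - c.a1 * y1 - c.a2 * y2;
    output[n] = y;
    x2 = x1; x1 = x;
    y2 = y1; y1 = y;
  }
  return output;
}
```

A quick sanity check on such a filter is that its impulse response sums to  
roughly 1 (unity DC gain), which also makes it easy to use in a test suite.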

One thing that seems absolutely necessary is to move the processing out of  
the main context, since there can never be any concurrency there.  
Instead, each JavaScript processing node should execute in its own context  
(for instance, in a Worker). Not only would this make it possible to have  
the processing done concurrently with the main context (e.g. avoiding  
stalls due to long running scripts), but it would also make it possible to  
have concurrency between audio nodes (i.e. utilize multiple CPU cores for  
JavaScript processing).


[1] http://www.buzzmachines.com/whatisbuzz.php
[2] http://www.renoise.com/
[3] http://www.rebirthmuseum.com/
Received on Monday, 21 May 2012 11:45:20 UTC