Re: Web Audio API Proposal

Hi Robert,

Thanks for your comments.  First of all, I want to make it clear that I
think direct processing in JS should be part of the API.  I think some of
the demos you guys have created are very cool.  But I don't think that JS
processing alone can handle all of the use cases we should cover.  For
graphics, we could have taken the same approach you're now proposing for
audio.  Instead of the rich set of compositing APIs in canvas 2D and WebGL,
we could have settled for trying to render animated content at 60fps using
a very simple API: context.putImageData().  Every single pixel of every
animated frame would be generated exclusively in JavaScript.  That approach
isn't tenable.  Including a richer set of graphics APIs in canvas and WebGL
is important not only for the obvious performance reasons, but also because
those APIs are higher-level and more directly useful to the JavaScript
developer.  Similarly, for audio we face the same kinds of performance
issues, and, just as important, the same need for a higher-level API.
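To make the graphics analogy concrete, here is a rough sketch of the two
styles (the element id and the particular animation are made up purely for
illustration):

    var canvas = document.getElementById('c');
    var ctx = canvas.getContext('2d');

    // Low-level approach: compute every pixel in JS, sixty times a second.
    var frame = ctx.createImageData(canvas.width, canvas.height);
    function drawPixels(t) {
      var data = frame.data;
      for (var i = 0; i < data.length; i += 4) {
        data[i]     = (i + t) & 255;          // red
        data[i + 1] = 0;                      // green
        data[i + 2] = 255 - ((i + t) & 255);  // blue
        data[i + 3] = 255;                    // alpha (opaque)
      }
      ctx.putImageData(frame, 0, 0);
    }

    // Higher-level approach: one native call does the heavy lifting.
    function drawRect(t) {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.fillStyle = 'rgb(200, 0, 55)';
      ctx.fillRect(t % canvas.width, 20, 100, 100);
    }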

On Mon, Jun 14, 2010 at 8:27 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> That API looks extremely complicated. It looks like it will be a huge
> amount of work to get a precise spec with interoperability across diverse
> implementations.
>

From the JavaScript programmer's point of view, I think the API is fairly
easy to use and does not require much complexity.  If you take a look at
the JavaScript source for my demos:
http://chromium.googlecode.com/svn/trunk/samples/audio/index.html
it takes relatively little code to get some fairly interesting results.
Assuming for the moment that the DSP for these demos could be coded
directly in JavaScript, I think it would quickly become clear how much
*more* complex the code would become for the average JavaScript developer
to follow.  I'm very confident that the API I've proposed can be specified
in a tight and rigorous way.  In its current form the specification isn't
complete, but it can be made so.
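For a flavor of how little code I mean, here is a sketch of playing a
sound through a convolution reverb with the proposed graph-based API (names
follow the current draft and may still change; dryBuffer and impulseBuffer
stand in for AudioBuffers loaded elsewhere):

    var context = new AudioContext();

    // A simple processing graph: source -> convolver (reverb) -> output.
    var source = context.createBufferSource();
    source.buffer = dryBuffer;           // the sound to play

    var convolver = context.createConvolver();
    convolver.buffer = impulseBuffer;    // the room's impulse response

    source.connect(convolver);
    convolver.connect(context.destination);
    source.noteOn(0);                    // start playback immediately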


>
> Dave Humphrey's proposed API ( https://wiki.mozilla.org/Audio_Data_API )
> is far simpler because it leaves almost all audio processing to JS. This
> "Web Audio API" proposal has a section "Javascript Issues with real-time
> Processing and Synthesis:" which lists several problems, but they boil down
> to two underlying issues:
> 1) JS is slower than native code.
> 2) Processing audio on a Web page's "main thread" has latency risks.
>
> Issue #2 can be addressed by extending the Audio Data API so it can be used
> from Web Workers.
>

I don't believe that simply moving the JS processing to a web worker will
solve all of the glitching and latency issues.
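The numbers below are illustrative, but the shape of the problem isn't:
the audio callback faces a hard real-time deadline no matter which thread
it runs on.

    var sampleRate = 44100;  // samples per second
    var bufferSize = 1024;   // samples to deliver on each callback
    var deadlineMs = 1000 * bufferSize / sampleRate;  // ~23.2 ms, every time

    // Miss one deadline -- a GC pause, a page layout, scheduler jitter --
    // and the listener hears a click or a dropout.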



> For issue #1, there is experimental data showing that many kinds of effects
> can be done "fast enough" in JS. See the Audio Data API demos, and
> https://bugzilla.mozilla.org/show_bug.cgi?id=490705#c49 for some
> performance numbers. Certainly there's still a performance gap between
> current JS implementations and hand-vectorized code, but it seems to me more
> profitable to work on addressing that gap directly (e.g. improving JS
> implementations, or adding vector primitives to JS, or providing a library
> of standard signal processing routines that work on WebGLArrays, or even
> NaCl/PNaCl) than hardcoding a ton of audio-specific functionality behind a
> complex API. The latter approach will be a lot more work, not reusable
> beyond audio, and always limited, as people find they need specific effects
> that aren't yet supported in the spec or in the browser(s) they want to
> deploy on.
>
> Rob


Some effects *can* be made fast enough.  But some very common cases can't
be, and the glitching, latency, and scalability problems there are
extremely worrisome.
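To put a number on "can't be made fast enough", take convolution reverb as
one concrete, common case.  The figures below are a back-of-the-envelope
illustration, not a benchmark:

    // Cost of a direct (non-FFT) convolution reverb computed in JS.
    var sampleRate = 44100;   // output samples per second
    var irSeconds = 1.0;      // a fairly short reverb tail
    var irSamples = sampleRate * irSeconds;
    var macsPerSecond = irSamples * sampleRate;  // ~1.94e9 multiply-adds/sec

    // Partitioned FFT convolution is vastly cheaper, but that is exactly
    // the kind of optimized native DSP a built-in node can supply.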

Once again, I'm proposing that we include direct JavaScript processing, but
not as the exclusive way of generating and processing audio on the web.

Best Regards,
Chris Rogers
