- From: Chris Rogers <crogers@google.com>
- Date: Tue, 15 Jun 2010 17:20:17 -0700
- To: robert@ocallahan.org
- Cc: Chris Marrin <cmarrin@apple.com>, public-xg-audio@w3.org
- Message-ID: <AANLkTinLJ5z6v6YkVsLRzkgpCyv8nExmWGaOwpk1RjHM@mail.gmail.com>
On Tue, Jun 15, 2010 at 4:38 PM, Robert O'Callahan <robert@ocallahan.org> wrote:

> On Wed, Jun 16, 2010 at 11:01 AM, Chris Marrin <cmarrin@apple.com> wrote:
>
>> It's an interesting idea to imagine an "audio processing language" for use here. But I don't think JavaScript is the appropriate candidate for such a language. OpenCL would do a good job of audio processing. Someday perhaps WebGL shaders will take the place of SVG filters. And perhaps in the future we will have WebCL, which can bring OpenCL capabilities to the browser. At that point, "programmable audio processing" might be possible. Until then, I think we need a set of fixed-function audio processing capabilities.
>
> Why not plan for that future now? Otherwise we'll end up saddled with a vestigial fixed-function pipeline, much like OpenGL has now.

Personally, I'm not a big fan of the idea of using OpenCL for custom audio "shaders". That said, it's not hard to imagine a special kind of AudioNode which does OpenCL processing. Similarly, I'm imagining a special kind of AudioNode for JavaScript audio generation/processing. It could be mixed freely with the other AudioNodes. I think many of the Mozilla demos could be ported with very little API change to this type of environment. In fact, I've already started coding this in WebKit with a *very* experimental AudioNode subclass: JavascriptAudioSourceNode. It actually works, and I've written a few simple test cases for it.

The API I'm proposing *is* trying to accommodate the future because of its modular nature. As different types of audio sources or processors are encountered, they can be added as AudioNodes without changing the nature of the underlying API. I think that supporting only JavaScript processing is more limiting.

>> Many of the audio nodes described in Chris' proposal are similarly simple abstractions of existing audio functionality.
>
> OK, but is there any evidence that they're complete? What fraction of use cases would be forced to resort to JS-based sample processing?

I'm not claiming that they're complete, only that they represent a good set of well-established building blocks sufficient for creating a good variety of web audio applications. I've created a number of demos to try to illustrate how the APIs could be used:

http://chromium.googlecode.com/svn/trunk/samples/audio/index.html

I used to work at Apple, where I was one of the designers of the *Audio Units* plugin architecture, which is still used today. So I'm acutely aware of how desirable it is to have an architecture where arbitrary audio code may be loaded and run. Loading arbitrary custom native audio code into the browser presents some fairly serious security implications. At Google, we're working on a technology called *Native Client* (NaCl for short) to try to address some of these issues, but that's a whole different topic. Chris Marrin has proposed using OpenCL for custom audio code, which has its own set of challenges. JavaScript processing is another alternative for some use cases.

The modular API I'm proposing is trying to encompass all of these possibilities. I'm not trying to exclude JavaScript processing, just adding more possibilities.

Cheers,
Chris
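To illustrate the idea of a JavaScript node mixed freely with fixed-function nodes, here is a minimal sketch. The class names (`JSAudioNode`, `GainNode`), the `render()` pull model, and the buffer size are all assumptions for illustration, not the actual experimental WebKit API; the point is only that a JS callback fills sample buffers and composes with other nodes in a graph:

```javascript
// Hypothetical sketch -- these names and this pull-based render model are
// assumptions, not the real JavascriptAudioSourceNode API.

// A "node" whose samples come from a JavaScript callback.
class JSAudioNode {
  constructor(bufferSize, callback) {
    this.bufferSize = bufferSize;
    this.callback = callback; // fills a Float32Array of samples
  }
  render() {
    const out = new Float32Array(this.bufferSize);
    this.callback(out);
    return out;
  }
}

// A fixed-function node it can be mixed with: a simple gain stage.
class GainNode {
  constructor(source, gain) {
    this.source = source;
    this.gain = gain;
  }
  render() {
    const buf = this.source.render();
    for (let i = 0; i < buf.length; i++) buf[i] *= this.gain;
    return buf;
  }
}

// JS-generated 440 Hz sine tone at a 44.1 kHz sample rate.
let phase = 0;
const sine = new JSAudioNode(128, (out) => {
  for (let i = 0; i < out.length; i++) {
    out[i] = Math.sin(phase);
    phase += (2 * Math.PI * 440) / 44100;
  }
});

// The JS source feeds a fixed-function node, forming a small graph.
const graph = new GainNode(sine, 0.5);
const samples = graph.render();
```

The design point is that the graph neither knows nor cares whether a node's samples come from JavaScript, native code, or (hypothetically) OpenCL; each is just another AudioNode.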
Received on Wednesday, 16 June 2010 00:21:15 UTC