Re: Music Synthesis

On Jun 16, 2010, at 3:30 PM, Robert O'Callahan wrote:

> On Thu, Jun 17, 2010 at 3:36 AM, Chris Marrin <cmarrin@apple.com> wrote:
> Chris' proposal is node-based. As such, it can easily be optimized for a given platform. The Mozilla proposal simply provides audio samples which can be used by JavaScript and then played (or not, in the case of audio visualizers). It sounds like you're advocating an API to go with the Mozilla proposal. That can be optimized too, but it's not currently being proposed. If it were, it would be an SVG (Chris' proposal) vs. Canvas (Mozilla's proposal with an API) discussion. Either way, I don't believe access to the audio samples with pure JavaScript processing is sufficient.
> 
> So if we had a WebGLArray (or whatever it gets renamed to in ES5) processing library amenable to parallel/vector/GPU optimization, and a way of delivering samples to a Worker for processing to avoid main-thread latency, do you have any fundamental objection to that kind of approach?

I think Chris' proposal can be thought of as that processing library. I'm familiar with Canvas 2D and WebGL, so I'll phrase it in those terms. Both of those APIs start with a Context, which exposes the interface. Some parts of that API perform operations (like stroke or drawElements), others set up state (like translate or activeTexture), and others create objects that take part in the API (like createPattern or createTexture).
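
To make those three categories concrete, here is a small sketch using the actual Canvas 2D and WebGL calls (the drawElements line assumes shaders and buffers have already been bound):

    const canvas = document.createElement("canvas");

    // Canvas 2D: state setup, object creation, then an operation.
    const ctx = canvas.getContext("2d");
    ctx.translate(10, 10);                               // state setup
    const pattern = ctx.createPattern(canvas, "repeat"); // object creation
    ctx.strokeRect(0, 0, 50, 50);                        // operation: draws immediately

    // WebGL: the same three categories.
    const gl = canvas.getContext("webgl");
    gl.activeTexture(gl.TEXTURE0);                       // state setup
    const tex = gl.createTexture();                      // object creation
    gl.drawElements(gl.TRIANGLES, 0, gl.UNSIGNED_SHORT, 0); // operation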

The needs of an audio API are somewhat different from those of a drawing API, but all of those components are present in Chris' proposal. We create a context, set up state, and create sub-objects to handle the details of the "rendering". I think the biggest difference is that the Canvas APIs are immediate mode (drawing is done when the call is made), while the audio API sets up a graph and processes audio as it comes in.
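
To illustrate the graph-based style, here's a sketch of what setting up such a graph might look like. The names (AudioContext, createBufferSource, createGain, etc.) are illustrative assumptions in the spirit of the proposal, not a settled API; the point is that nothing sounds until the graph is wired and the source is started:

    const actx = new AudioContext();          // context (hypothetical name)
    const source = actx.createBufferSource(); // sub-object
    const gain = actx.createGain();           // sub-object
    gain.gain.value = 0.5;                    // state setup
    source.connect(gain);                     // wire the graph
    gain.connect(actx.destination);
    source.start();                           // samples are processed as they flow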

If we can ignore the immediate mode vs. implicit processing issue for a moment, it seems like we could start with the Context/State/Sub-object/Rendering design and pour as much or as little as needed into it. Does that sound like a reasonable starting point?
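
If it helps, one way to type that split, purely as a design sketch (all names here are assumptions for discussion, not proposed interfaces):

    interface MixerNode {
      connect(target: MixerNode): void;  // graph wiring
    }

    interface HypotheticalAudioContext {
      sampleRate: number;                // state
      createSource(): MixerNode;         // sub-object creation
      createFilter(): MixerNode;         // sub-object creation
      readonly destination: MixerNode;   // the "rendering" endpoint
    }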

-----
~Chris
cmarrin@apple.com

Received on Wednesday, 16 June 2010 23:37:51 UTC