Re: Web Audio API Proposal

Hi Chris,

I'm not sure we can also get rid of the AudioGainNode and integrate the
concept of gain directly into all AudioNodes.  This is because with the new
model Jer is proposing we're connecting multiple outputs all to the
*same* input, so we still need a way to access the individual gain amounts
for each of the separate outputs.
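
For example, with two sources converging on the same filter input (source1,
source2 and filter stand in for nodes created elsewhere, and the creation
method and gain attribute names are just illustrative, not settled spec):

        var context = new AudioContext();
        var gain1 = context.createGainNode();   // illustrative name
        var gain2 = context.createGainNode();
        source1.connect(gain1);
        source2.connect(gain2);
        gain1.connect(filter);    // both outputs converge on one input
        gain2.connect(filter);
        gain1.gain.value = 0.25;  // each connection keeps its own gain
        gain2.gain.value = 0.75;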

Cheers,
Chris

On Mon, Jun 21, 2010 at 2:17 PM, Chris Marrin <cmarrin@apple.com> wrote:

>
> On Jun 21, 2010, at 12:27 PM, Chris Rogers wrote:
>
> > Jer Noble and I had a great discussion on Friday about his ideas.
> >
> > 1. Ownership
> > Jer is quite right that the concept of ownership is not necessary to
> expose in the javascript API.  We looked carefully at ways that an
> implementation could just "do the right thing" and I'm going to try to
> implement that.  This is a great simplification so I'll change it in the
> spec.
> >
> > 2. AudioMixerNode and AudioMixerInputNode
> > Another idea Jer brought up, which he explains in his email below, is to
> get rid of the AudioMixerNode and AudioMixerInputNode and replace it with an
> AudioGainNode.  This along with the idea of being able to connect multiple
> outputs to a single input makes things a lot cleaner.  I love this idea and
> will change the spec.
>
> I was thinking about this simplification the other day. It might be
> reasonable to put gain functionality right in the common AudioNode
> interface. That would make the most basic core API consist of just
> AudioContext and AudioNode. With these two you can create a mixing board
> with separate gain controls on each input. Then you can derive an
> AudioScriptNode which would just add JavaScript code to process the audio.
> You'd still get gain and mixing for free, which lowers the overhead of the
> JS code.
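>
> For instance (a rough sketch of that shape; the gain attribute on
> AudioNode and the context.destination node assumed here are illustrative,
> not settled API):
>
>        var context = new AudioContext();
>        source1.connect(context.destination);  // mix at a common input
>        source2.connect(context.destination);
>        source1.gain = 0.5;   // gain lives right on each AudioNode
>        source2.gain = 0.8;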
>
> I think gain is a pretty fundamental part of any audio node, so this seems
> reasonable.
>
> We could then have levels of audio processing support, with the lowest
> level being nothing more than AudioContext, AudioNode and AudioScriptNode,
> along with AudioSourceNode and AudioBuffer to handle the input side. That's
> an awfully simple audio API, addressing some of the "complexity" concerns
> that have been raised. At higher levels of support you can add
> RealtimeAnalyzerNode, ConvolverNode/ImpulseResponse,
> AudioPannerNode/AudioListener, etc.
>
> Rather than levels we could borrow WebGL's concept of extensions. All
> implementations would support the core spec (AudioContext, AudioNode,
> AudioScriptNode, AudioSourceNode, AudioBuffer). Then you query the available
> extensions and request an extension object, which would contain the API
> needed for that extension. Extensions might be:
>
> - RealtimeAnalyzer
> - Convolution
> - 3D Audio
> - Filters (low pass, notch, etc.)
> - WebCL (someday?)
>
> Here's an example:
>
>        var context = new AudioContext();
>        // Query the extension; an implementation that doesn't support it
>        // would presumably return null here, as WebGL does.
>        var analyzerFactory = context.getExtension("RealtimeAnalyzer");
>        // The extension object carries the API for creating its nodes.
>        var analyzer = analyzerFactory.createRealtimeAnalyzer();
>        myAudio.audioSource.connect(analyzer);
>        ...
>        // process analyzer data as it comes in
>        ...
>
> -----
> ~Chris
> cmarrin@apple.com
>

Received on Monday, 21 June 2010 21:35:17 UTC