Re: Web Audio API Proposal

Thanks, Chris.  Obviously you've built a cool and flexible system and should be proud of it.

Will look at the sources of the various demos when I get a chance.   Is the selection of demos meant to be representative of the most typical web audio use cases that we're trying to provision?  Or to demonstrate the functional range of the proposed API?

IMHO it's probably a bit premature to settle on any particular design yet as a basis for the standard.  Probably best to see which use cases get consensus support first, and only then evaluate proposals (or create new design[s]) against the requirements.  Per the previous discussions here, I would expect a tension to arise at some point between implementation complexity and portability, which we'll have to work out somehow.  But we're not there yet, I don't think.

	-- Chris G.


On Jun 21, 2010, at 7:54 PM, Chris Rogers wrote:
> Hi Chris,
> 
> This is a rich topic with plenty of nuance to it, so there's plenty to discuss.  I'm not sure if you've already looked through the javascript source (view source) for the examples/demos I made:
> 
> http://chromium.googlecode.com/svn/trunk/samples/audio/index.html
> 
> The examples cover a number of different use cases and demonstrate (I hope) that the API is not really that difficult to use.  The number of lines of javascript is actually fairly small.  If you prefer to think in terms of mixers, sends, and insert effects then a very small javascript wrapper library is all that is needed.  I originally had an API much more like what you're describing, but switched to a more modular approach, incorporating some good ideas from my conversations with the Apple folks.  At the point where I had to translate my demos to the new API, I was surprised by how little the amount of javascript changed.  I believe the extra possibilities offered by the modular approach are well worth it.
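For concreteness, the kind of thin wrapper being described might look roughly like this: a mixer-style channel strip (channel fader plus per-effect sends) layered over a modular connect-the-nodes graph. Every name here is a hypothetical stand-in, not the proposed API; the nodes just sum their inputs and apply a gain so the sketch is self-contained.

```javascript
// Minimal stand-in for a modular graph node: sums its inputs and applies
// an output gain. A node with no inputs acts as a source emitting a
// constant value. (Hypothetical sketch; not the proposed API.)
class Node {
  constructor(gain = 1.0, sourceValue = 0) {
    this.gain = gain;
    this.sourceValue = sourceValue;
    this.inputs = [];
  }
  connect(dest) {
    dest.inputs.push(this);
    return dest;                          // allow chaining a.connect(b)
  }
  process() {
    const input = this.inputs.length
      ? this.inputs.reduce((sum, n) => sum + n.process(), 0)
      : this.sourceValue;
    return this.gain * input;
  }
}

// The mixer-style wrapper: one channel fader into the main bus, plus an
// independent send level into each shared effect bus (reverb, chorus, ...).
class MixerChannel {
  constructor(source, mainBus, effectBuses = []) {
    this.fader = new Node(1.0);
    source.connect(this.fader).connect(mainBus);
    this.sends = effectBuses.map(bus => {
      const send = new Node(0.0);         // sends default to silent
      source.connect(send).connect(bus);
      return send;
    });
  }
  setGain(g) { this.fader.gain = g; }
  setSend(i, g) { this.sends[i].gain = g; }
}

// Usage: one channel into a main bus, with a single reverb send.
const mainBus = new Node();
const reverbBus = new Node();
const channel = new MixerChannel(new Node(1.0, 1.0), mainBus, [reverbBus]);
channel.setGain(0.5);
channel.setSend(0, 0.25);
// mainBus.process() -> 0.5, reverbBus.process() -> 0.25
```

A few dozen lines like these would be the whole "wrapper library," which is the claim in miniature: the modular graph underneath still supports a mixer/sends mental model for developers who prefer it.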
> 
> Cheers,
> Chris
> 
> On Mon, Jun 21, 2010 at 6:30 PM, Chris Grigg <chris@chrisgrigg.org> wrote:
> Small observation on this conversation - I would think the functionality those kinds of graphs (multiple sources w/reverb & chorus) give might be simpler to achieve and control with a mixer-and-effects-send model, as opposed to a free-form-graph model.  Especially if you also go with the implied-mixer model you guys have been discussing today.  Then source objects could have N effect send settings, in addition to the gain setting you settled on; and you'd select the particular effect associated with each send (i.e. chorus, reverb, etc.) on whatever object stands in for the implied mixer -- probably the output, or the audio context.  Not that there isn't a good argument for also providing insert FX -- there certainly is, both in-line with a source and in-line with a mix output (for compression, EQ, etc.).
> 
> General comment - I guess while I've found fitting real-world music/sound applications into DirectShow-like filter graph models to be flexible and doable, it's also perhaps a bit more fiddly than strictly necessary for the developer/customer.  The flexibility might be considered overkill in many of the most common use cases.  We can explore that in detail later if anyone wants.
> 
>        -- Chris G.
> 
> 
> On Jun 21, 2010, at 4:47 PM, Jer Noble wrote:
> >
> > On Jun 21, 2010, at 3:27 PM, Chris Marrin wrote:
> >
> >> On Jun 21, 2010, at 2:34 PM, Chris Rogers wrote:
> >>
> >>> Hi Chris,
> >>>
> >>> I'm not sure we can also get rid of the AudioGainNode and integrate the concept of gain directly into all AudioNodes.  This is because with the new model Jer is proposing we're connecting multiple outputs all to the same input, so we still need a way to access the individual gain amounts for each of the separate outputs.
> >>
> >> Right, but if every node can control its output gain, then you just control it there, right?  So if you route 3 AudioSourceNodes into one AudioNode (that you're using as a mixer) then you control the gain of each channel in the AudioSourceNodes, plus the master gain in the AudioNode.  For such a common function as gain, it seems like this would simplify things.  The default gain would be 0 dB, which would short-circuit the gain stage to avoid any overhead.
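As a toy illustration of this "gain on every node" idea (all names and mechanics are hypothetical, not the proposed API): per-source gains, a master gain on the mixing node, and unity gain (0 dB) skipping the multiply entirely.

```javascript
// Hypothetical sketch: every node carries an output gain, so mixing
// three sources needs no separate gain objects. Not the proposed API.
class MixNode {
  constructor(gain = 1.0) {
    this.gain = gain;
    this.inputs = [];
  }
  connect(dest) { dest.inputs.push(this); }
  output() {
    const sum = this.inputs.reduce((s, n) => s + n.output(), 0);
    // Unity gain (0 dB) short-circuits the gain stage, as suggested.
    return this.gain === 1.0 ? sum : sum * this.gain;
  }
}

// A source node is just a node that produces its own signal.
class SourceNode extends MixNode {
  constructor(value, gain = 1.0) {
    super(gain);
    this.value = value;                      // constant signal for the sketch
  }
  output() {
    return this.gain === 1.0 ? this.value : this.value * this.gain;
  }
}

// Usage: three sources into one mixing node.
const mixer = new MixNode(0.5);              // master gain on the "mixer"
const sources = [new SourceNode(1.0, 0.25),
                 new SourceNode(1.0, 0.5),
                 new SourceNode(1.0, 1.0)];  // unity: gain stage skipped
sources.forEach(s => s.connect(mixer));
// mixer.output() -> (0.25 + 0.5 + 1.0) * 0.5 = 0.875
```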
> >
> >
> > Actually, I don't agree that modifying the output gain is so common an operation that it deserves being promoted into AudioNode.  Sure, it's going to be common, but setting a specific gain on every node in a graph doesn't seem very likely.   How many nodes will likely have a gain set on them?  1/2?  1/4?  I'd be willing to bet that a given graph will usually have as many gain operations as it has sources, and no more.
> >
> > I can also imagine a simple scenario where it makes things more complicated instead of less:
> >
> > [attachment PastedGraphic-1.tiff: graph with Source 1 and Source 2 both connected to a shared Reverb node]
> >
> > In this scenario, there's no way to change the gain of the Source 1 -> Reverb connection independently of the Source 2 -> Reverb connection.  To do it, you would have to do the following:
> >
> > [attachment PastedGraphic-3.pdf: the same routing with a generic AudioNode inserted between each source and the Reverb]
> >
> > And it seems very strange to have to create a generic AudioNode in order to modify a gain.  Alternatively, you could create multiple AudioReverbNodes, but again, it seems weird to have to create multiple reverb nodes just so you can change the gain going to only one of them.
> >
> > Right now, every AudioNode subtype performs a discrete operation on its input and passes the result to its output.  Adding gain to every AudioNode subtype would make things more confusing, not less.
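The distinction shows up clearly in code. Here is a rough sketch of Jer's two-sources-into-one-reverb case using a dedicated gain node per connection (all names are hypothetical stand-ins, and the "reverb" is just a pass-through, not a real effect):

```javascript
// Hypothetical stand-ins, not the proposed API. The point: with a
// dedicated gain node on each connection, the two source -> reverb
// levels stay independent while the reverb itself is shared.
class GainNode {
  constructor(gain = 1.0) { this.gain = gain; this.inputs = []; }
  connect(dest) { dest.inputs.push(this); return dest; }
  process() {
    return this.gain * this.inputs.reduce((s, n) => s + n.process(), 0);
  }
}

class SourceNode {
  constructor(value) { this.value = value; }
  connect(dest) { dest.inputs.push(this); return dest; }
  process() { return this.value; }               // constant signal
}

const reverb = new GainNode(1.0);     // pass-through stand-in for a reverb
const source1 = new SourceNode(1.0);
const source2 = new SourceNode(1.0);
const toReverb1 = new GainNode(0.25); // Source 1 -> Reverb level
const toReverb2 = new GainNode(0.5);  // Source 2 -> Reverb level
source1.connect(toReverb1).connect(reverb);
source2.connect(toReverb2).connect(reverb);

// Each connection's level can now change independently:
toReverb1.gain = 0.5;                 // Source 2 -> Reverb is untouched
```

Under the fold-gain-into-every-node alternative, the interposed `toReverb` objects would instead be generic AudioNodes used purely for their gain, which is the strangeness being described; a dedicated gain node makes the intent explicit.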
> >
> > -Jer

Received on Tuesday, 22 June 2010 19:19:43 UTC