Re: Web Audio API Proposal

On Jun 21, 2010, at 4:47 PM, Jer Noble wrote:

> 
> On Jun 21, 2010, at 3:27 PM, Chris Marrin wrote:
> 
>> On Jun 21, 2010, at 2:34 PM, Chris Rogers wrote:
>> 
>>> Hi Chris,
>>> 
>>> I'm not sure we can also get rid of the AudioGainNode and integrate the concept of gain directly into all AudioNodes.  This is because with the new model Jer is proposing we're connecting multiple outputs all to the same input, so we still need a way to access the individual gain amounts for each of the separate outputs.
>> 
>> Right, but if every node can control its output gain, then you just control it there, right? So if you route 3 AudioSourceNodes into one AudioNode (that you're using as a mixer), then you control the gain of each channel in the AudioSourceNodes, plus the master gain in the AudioNode. For such a common function as gain, it seems like this would simplify things. The default gain would be 0 dB, which would short-circuit the gain stage to avoid any overhead.
> 
> 
> Actually, I don't agree that modifying the output gain is so common an operation that it deserves being promoted into AudioNode. Sure, it's going to be common, but setting a specific gain on every node in a graph doesn't seem very likely. How many nodes are likely to have a gain set on them? Half? A quarter? I'd be willing to bet that a given graph will usually have as many gain operations as it has sources, and no more.
> 
> I can also imagine a simple scenario where it makes things more complicated instead of less:
> 
> <PastedGraphic-1.tiff>
> 
> In this scenario, there's no way to change the gain of the Source 1 -> Reverb connection independently of the Source 2 -> Reverb connection. To do it, you would have to do the following:
> 
> <PastedGraphic-3.pdf>
> 
> And it seems very strange to have to create a generic AudioNode in order to modify a gain.  Alternatively, you could create multiple AudioReverbNodes, but again, it seems weird to have to create multiple reverb nodes just so you can change the gain going to only one of them.
> 
> Right now, every AudioNode subtype has a discrete operation which it performs on its input and passes to its output. Adding gain to every AudioNode subtype would make things more confusing, not less.
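[For reference, the per-connection gain in Jer's scenario maps naturally onto a dedicated gain node sitting on one connection. A minimal plain-JavaScript sketch of the routing idea, not the proposed API: nodes are modeled here as functions returning one block of samples, and all names (`source`, `gainNode`, `mix`, `reverb`) are illustrative.]

```javascript
// Model each node as a function producing one block of samples.
// A dedicated gain node scales only the connection it sits on,
// so Source 1 -> Reverb can be attenuated independently of Source 2 -> Reverb.

function source(samples) {
  return () => Float32Array.from(samples);
}

function gainNode(input, gain) {
  return () => input().map(s => s * gain);
}

function mix(a, b) {
  return () => {
    const x = a(), y = b();
    return x.map((s, i) => s + y[i]);
  };
}

// Stand-in for a reverb: identity, since only the routing matters here.
function reverb(input) {
  return () => input();
}

const source1 = source([1, 1]);
const source2 = source([1, 1]);

// Source 1 passes through a gain stage; Source 2 connects directly.
const out = reverb(mix(gainNode(source1, 0.5), source2));
console.log(out()); // samples: [1.5, 1.5]
```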

OK, fair enough. My concern is that adding a gain stage will require extra buffering and extra passes through the samples. Do you think it will be practical for an implementation to optimize the gain calculation? For instance, I might have some software algorithm doing reverb. Since it's already running through each sample, it would be easy for it to do a multiply while it's accessing the sample (on either the input or output side). If the reverb node knows it has a single input, and that input is from a gain stage, it could do the gain calculation itself and avoid another pass through the data.
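[A sketch of the fusion optimization described above, in plain JavaScript; the `process` function is a placeholder for a per-sample reverb step, not real reverb code. When a node knows its single input comes from a gain stage, it can fold the multiply into its own sample loop and skip the intermediate buffer.]

```javascript
// Naive: a separate gain pass writes an intermediate buffer,
// then the reverb pass reads it back.
function twoPass(input, gain) {
  const tmp = new Float32Array(input.length);
  for (let i = 0; i < input.length; i++) tmp[i] = input[i] * gain;
  const out = new Float32Array(input.length);
  for (let i = 0; i < input.length; i++) out[i] = process(tmp[i]);
  return out;
}

// Fused: the reverb applies the gain while reading each sample,
// avoiding the extra buffer and the extra pass over the data.
function fused(input, gain) {
  const out = new Float32Array(input.length);
  for (let i = 0; i < input.length; i++) out[i] = process(input[i] * gain);
  return out;
}

// Placeholder per-sample operation; a real reverb would use delay lines.
function process(s) {
  return s * 0.9;
}

const input = Float32Array.from([1, 2, 3, 4]);
// Both paths produce identical samples; only the memory traffic differs.
```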

As long as optimizations like that are possible, I think having a separate AudioGainNode is reasonable.

-----
~Chris
cmarrin@apple.com

Received on Tuesday, 22 June 2010 23:20:48 UTC