Re: Web Audio API Proposal

Hi Ricard - good questions!  I'll try my best to answer:

On Tue, Jul 13, 2010 at 2:10 PM, Ricard Marxer Piñón <ricardmp@gmail.com> wrote:

> Hi,
>
> Yes, making that JavaScriptProcessor node makes sense.
>
> I still have a few questions left.
>
> Question 1
> -----
> If you connect the audioSource of an <audio> element, does that
> audioSource disconnect itself from the default AudioDestination?
> From my point of view there are two clear use cases for using the
> audioSource from an <audio> element:
> 1) to filter or apply some effect to that audio and output it
> directly, therefore muting the original audio
> 2) to analyze it and create some visualization, in which case we still
> want to play the original audio
>

I was thinking that the implicit connection to the "default"
AudioDestination would be broken as soon as it's connected into a "true"
processing graph.  This way we could avoid having to explicitly disconnect
it as you suggest.  Then, I think both cases (1) and (2) can be handled
identically.  For example:

In case (1) the JavaScriptProcessor applies some effect and the output is
connected to context.destination.  We hear the processed output.
In case (2) the JavaScriptProcessor does an FFT analysis to display
graphically, but also copies the input samples to the output samples, acting
as a "pass-through" processor as far as the audio stream is concerned.  We
then hear the original audio, but are now showing some cool graphics.  This
is how the native RealtimeAnalyserNode works.
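
Just to make the pass-through idea concrete, here's a very rough sketch of
case (2).  I'm taking liberties with names that aren't nailed down yet: I'm
pretending the source is reachable as audioElement.audioSource, that the
callback is a process() method assigned on the node, and that the event
hands us per-channel Float32Arrays as "inputSamples" / "outputSamples"
(those last two names are made up purely for illustration):

    var context = new AudioContext();
    var audioElement = document.getElementById('music');
    var source = audioElement.audioSource;
    var processor = context.createJavaScriptProcessor(4096);

    processor.process = function(event) {
      for (var c = 0; c < event.inputSamples.length; c++) {
        var input  = event.inputSamples[c];
        var output = event.outputSamples[c];
        for (var i = 0; i < event.numberOfSampleFrames; i++)
          output[i] = input[i];                 // pass-through: copy input to output
        // ...feed "input" into an FFT here for the visualization...
      }
    };

    source.connect(processor);                  // implicit default connection breaks here
    processor.connect(context.destination);     // so we still hear the original audio

Case (1) is identical except that the inner loop writes a processed sample
into output[i] instead of a straight copy.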


> One question related to this is whether the volume control that
> <audio> elements have by default would modify the audioSource gain or
> the default audioDestination.  I think it would make more sense to
> modify the audioSource gain, because then if the user modifies the
> volume control in the filter use case, this would work as expected
> (modifying the volume of the audio that we are listening to).
>

I was thinking that the volume control on the <audio> element would simply
be an "alias" for the audioSource gain.  Changing either one changes the
other.
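
In other words (again pretending the source is reachable as
audioElement.audioSource, and that its gain ends up being exposed as a
simple attribute), these two lines would do the same thing:

    audioElement.volume = 0.5;              // the <audio> element's normal volume control
    audioElement.audioSource.gain = 0.5;    // the alias on the source node

and reading one back after setting the other would return the same value.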


> Question 2
> -----
> Does the audioSource element have a sampleRate, bufferLength,
> channelCount?  This way we could set up our filter once before
> connecting the audioSource to it and then let it run.


Just a minor nitpick - the audioSource is not actually an element
(HTMLElement) but is just a regular JavaScript object.

*Sample Rate*
Right now in my specification document, all AudioNode objects have a
sampleRate, so AudioElementSourceNode would too.  But I think we
should change this and consider another alternative which I think is
reasonable (and would highly recommend).  This is to consider that every
single node in the AudioContext is running at the same sample-rate.  This is
currently the case, covers almost all use cases I can think of, and avoids
trying to connect together nodes that are running at different rates (where
very bad things will happen!)  If we can make this assumption, then only the
AudioContext needs to have a sampleRate attribute.  Even though individual
audio elements may reference files which are at different sample rates, they
would always be converted (behind the scenes) to the AudioContext sample rate
before we ever touch them.

In this case, the sampleRate would never change as far as the AudioNodes are
concerned since the stream always gets converted to the AudioContext
sampleRate.
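
So code would only ever need to ask the context, something like this
(writing the context creation as new AudioContext() purely for
illustration):

    var context = new AudioContext();
    var rate = context.sampleRate;    // e.g. 44100

    // An <audio> element referencing, say, a 22050Hz file would be
    // resampled to "rate" before its audioSource delivers any samples,
    // so no node in the graph ever sees a different rate.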

*bufferLength*
In my specification document, there is no such thing as "bufferLength" for
an individual AudioNode.  The "event" passed back to the process() method
has a "numberOfSampleFrames" attribute which is the same thing as what
you're talking about I think.  This value could be an argument
to createJavaScriptProcessor, so we'll know it ahead of time.  From then on,
it could be guaranteed to never change, so we don't need a notification.
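
Something like this, where the number passed to createJavaScriptProcessor
is the numberOfSampleFrames the process() callback will always see (the
exact way the callback gets hooked up is still up in the air):

    var processor = context.createJavaScriptProcessor(2048);

    // Because 2048 is fixed at creation time, anything sized from it can
    // be allocated once, outside the audio callback:
    var analysisWindow = new Float32Array(2048);

    processor.process = function(event) {
      // event.numberOfSampleFrames is guaranteed to be 2048 here, so no
      // per-call reconfiguration or change notification is needed.
    };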

*channelCount*
The number of input channels could change (for example, from mono to
stereo).

So only the channelCount can actually change as far as the
JavaScriptProcessor is concerned, but your question still remains: should
we have an event notification for such a change, or simply require the
processor to deal with it on the fly?  I'm open to either possibility.
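
If we go the "deal with it on-the-fly" route, the processor would simply
look at the channel count on every call instead of caching it (again
assuming the event exposes the input and output as arrays of per-channel
buffers, with made-up names):

    processor.process = function(event) {
      var numberOfChannels = event.inputSamples.length;   // may be 1 now, 2 later
      for (var c = 0; c < numberOfChannels; c++) {
        var input  = event.inputSamples[c];
        var output = event.outputSamples[c];
        for (var i = 0; i < event.numberOfSampleFrames; i++)
          output[i] = input[i];
      }
    };

With an event notification instead, that check would move into the
notification handler and the per-call cost would go away.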



> Question 3
> -----
> How many AudioDestinationNode instances can exist per page (DOM)?  One
> per context? How many contexts can exist?  Can we connect audio
> streams with different properties (sampleRate, bufferLength,
> channelCount) to the same AudioDestinationNode instance?
>
> For this one I don't have any opinions yet, just the question.

I'm considering a single AudioDestinationNode per AudioContext.  This is the
"destination" attribute.  It's probably unnecessary to have more than one
AudioContext per document since everything can be routed and mixed using
just one.  But we could consider allowing more than one.  If we only allow
one, then I suppose we'd have to throw an exception (or something) if more
than one were created...
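
To illustrate why a single destination (and really a single context) seems
to be enough (the element names here are made up):

    var context = new AudioContext();
    var effect = context.createJavaScriptProcessor(4096);

    // Two independent sources, one routed through an effect and one not;
    // both end up summed at the single context.destination:
    musicElement.audioSource.connect(context.destination);
    voiceElement.audioSource.connect(effect);
    effect.connect(context.destination);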

Best Regards,
Chris
