Re: Web Audio API Proposal

Hi,

Yes, that JavaScriptProcessorNode approach makes sense.

I still have a few questions left.

Question 1
-----
If you connect the audioSource of an <audio> element, does that
audioSource disconnect itself from the default AudioDestination?
From my point of view there are two clear use cases for the
audioSource of an <audio> element:
1) to filter or apply some effect to that audio and output it
directly, thereby muting the original audio
2) to analyze it and create some visualization, in which case we still
want to play the original audio

For the filter use case I would modify the code you posted a bit, like this:

var myJsFilter;

function setupFilter() {
    myJsFilter = context.createJavaScriptProcessor();
    myJsFilter.onprocess = process;
    var audio = document.getElementById('audioElement');
    // disconnect from the default destination it used to have
    audio.audioSource.disconnect();
    audio.audioSource.connect(myJsFilter);
    myJsFilter.connect(context.destination);
}

// the process code stays the same
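
To make the filter case a bit more concrete, here is what the DSP inside
process() could look like, for example a trivial one-pole lowpass written in
JS.  This is only a sketch: the coefficient is made up, and I'm assuming the
same event layout as in your example (interleaved stereo, with inputSamples,
outputSamples and numberOfSampleFrames).

var lastLeft = 0, lastRight = 0;
var a = 0.1;  // smoothing coefficient; would normally be derived from
              // the cutoff frequency and the sample rate

function process(event) {
    var input = event.inputSamples;     // interleaved stereo Float32Array
    var output = event.outputSamples;
    var n = event.numberOfSampleFrames;

    for (var i = 0; i < n; i++) {
        // y[n] = y[n-1] + a * (x[n] - y[n-1]), once per channel
        lastLeft  += a * (input[2 * i]     - lastLeft);
        lastRight += a * (input[2 * i + 1] - lastRight);
        output[2 * i]     = lastLeft;
        output[2 * i + 1] = lastRight;
    }

    event.commit();  // only if a commit() turns out to be required
}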

For the visualization use case, the disconnect() method wouldn't be necessary:

var myAudioAnalysis;

function setupAnalysis() {
    myAudioAnalysis = context.createJavaScriptProcessor();
    myAudioAnalysis.onprocess = process;
    var audio = document.getElementById('audioElement');
    audio.audioSource.connect(myAudioAnalysis);
    myAudioAnalysis.connect(context.destination);
}

// in this case the process() method would perform an FFT, some band
// processing and smoothing in JS.
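
As a rough sketch of what that analysis callback could look like (using a
simple smoothed RMS level as a stand-in for the real FFT and band processing,
and again assuming the interleaved-stereo event layout from your example):

var SMOOTHING = 0.8;
var smoothedLevel = 0;

function process(event) {
    var input = event.inputSamples;     // interleaved stereo Float32Array
    var output = event.outputSamples;
    var n = event.numberOfSampleFrames;

    var energy = 0;
    for (var i = 0; i < n; i++) {
        // pass the audio through untouched, we only want to look at it
        output[2 * i]     = input[2 * i];
        output[2 * i + 1] = input[2 * i + 1];

        // mix down to mono for the measurement
        var mono = 0.5 * (input[2 * i] + input[2 * i + 1]);
        energy += mono * mono;
    }

    // exponential smoothing so the visualization doesn't flicker
    var rms = Math.sqrt(energy / n);
    smoothedLevel = SMOOTHING * smoothedLevel + (1 - SMOOTHING) * rms;

    // the drawing code can read smoothedLevel whenever it repaints
}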

One related question is whether the volume control that <audio>
elements have by default would modify the audioSource gain or the
default audioDestination.  I think it would make more sense to modify
the audioSource gain, because that way, if the user changes the volume
control in the filter use case, it would work as expected (modifying
the volume of the audio we are actually listening to).
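
A small (purely hypothetical) illustration of the behaviour I would expect,
reusing context and myJsFilter from the filter example above:

var audio = document.getElementById('audioElement');
audio.audioSource.disconnect();           // the default output is muted
audio.audioSource.connect(myJsFilter);    // we only hear the filtered signal
myJsFilter.connect(context.destination);

audio.volume = 0.5;  // with the audioSource-gain behaviour this also halves
                     // the level of the filtered signal we actually hear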

Question 2
-----
Does the audioSource have a sampleRate, bufferLength and
channelCount?  This way we could set up our filter once before
connecting the audioSource to it and then let it run.  These
sampleRate, bufferLength and channelCount attributes depend on the
actual audio stream/file, so they could change if we change the
url of the <audio> element from code.  Therefore the audio element
should have an event notifying about changes in those attributes.  I
think Mozilla's current API has such an event, called
loadedmetadata or something similar.
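
For reference, this is roughly how it works for media elements today, so the
audioSource attributes could plausibly hook into the same moment (just my
assumption):

var audio = document.getElementById('audioElement');
audio.addEventListener('loadedmetadata', function() {
    // in the Mozilla API, if I remember correctly, attributes like
    // mozSampleRate and mozChannels are valid from this point on; the
    // audioSource attributes discussed above could behave the same way
}, false);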

function setupFilter() {
    myJsFilter = context.createJavaScriptProcessor();
    myJsFilter.onprocess = process;
    var audio = document.getElementById('audioElement');
    // disconnect from the default destination it used to have
    audio.audioSource.disconnect();
    audio.audioSource.onload = reconfigure;
    audio.audioSource.connect(myJsFilter);
    myJsFilter.connect(context.destination);
}
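
Here reconfigure() would read the new values and rebuild whatever depends on
them, something like this (a sketch that assumes the event carries the three
attributes, or that they can be read back from audio.audioSource):

function reconfigure(event) {
    currentSampleRate = event.sampleRate;
    currentChannelCount = event.channelCount;
    currentBufferLength = event.bufferLength;
    // recompute filter coefficients, reallocate temporary buffers, etc.
}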

The other way to do this would be for the event to contain these
properties.  That would suggest that the audio stream can change these
properties at any moment, even every frame (a bit weird, but maybe
there is some use case I'm missing here).  From the developer's point
of view it would mean checking for changes at each call to process()
and reconfiguring the filter when needed.

function process(event) {
  if (event.sampleRate != currentSampleRate ||
      event.channelCount != currentChannelCount ||
      event.bufferLength != currentBufferLength) {
      reconfigureInternals();
  }

  // do the normal processing...
}

I would prefer the first method.  To me it feels more intuitive and clean...

Question 3
-----
How many AudioDestinationNode instances can exist per page (DOM)?  One
per context? How many contexts can exist?  Can we connect audio
streams with different properties (sampleRate, bufferLength,
channelCount) to the same AudioDestinationNode instance?

For this one I don't have any opinions yet, just the question.

Thanks
ricard


On Tue, Jul 13, 2010 at 9:25 PM, Chris Rogers <crogers@google.com> wrote:
> Hi Corban,
> In the provisional specification I've been working on with Apple, an audio
> element has an "audioSource" attribute.  But an audio element is not also
> considered to be a destination.  The only destination is the AudioContext's
> destination.  Consider if you have multiple sources (multiple audio elements
> perhaps) but you want to create a mixer and apply an effect like reverb on
> all three sources.  Then each source can share the reverb effect and route
> the rendered audio to a single destination.
> Here's your example with a few changes:
> var context = new AudioContext();
> var lowpass = context.createLowPass2Filter();
> var audio = document.getElementById('audioElement');
> function setupAudioFilter() {
>   var source = audio.audioSource; // notice the audio element has this new attribute
>   source.connect(lowpass);
>   lowpass.connect(context.destination);
> }
> As soon as you start playing the audio element, it will be heard through the
> lowpass filter.
> But this isn't what you want for JavaScript processing, because this code
> will setup a routing graph which will do native processing (for example, the
> filter is run using native code).
> I think what you guys are interested in right now is how to do the actual
> DSP in JavaScript.  So here's what I would suggest for that.  I just made up
> the API for this with the idea that there would be a JavaScriptProcessorNode
> which invokes a callback function (called process() in this example).  I'm
> pretty sure this will work and can be a good starting point for the API, but
> we'll need to refine and perfect it.
> var context;
> var jsProcessor;
> function init() {
>     context = new AudioContext();
> }
> function setupJavascriptProcessing() {
>     jsProcessor = context.createJavaScriptProcessor();
>     jsProcessor.onprocess = process;
>     var audio = document.getElementById('audioElement');
>     audio.audioSource.connect(jsProcessor);
>     jsProcessor.connect(context.destination);
> }
> // This function gets called periodically to process a single buffer's worth of audio
> function process(event) {
>     // For this example, let's assume inputSamples and outputSamples are stereo interleaved, although I'd like
>     // to move to an API where these are non-interleaved.  This is a detail we can discuss later.
>     var inputSamples = event.inputSamples; // a Float32Array
>     var outputSamples = event.outputSamples; // a Float32Array
>     var n = event.numberOfSampleFrames; // number of sample-frames (for example 4096 left and right samples)
>
>     // DSP magic here where you would process n sample-frames from inputSamples -> outputSamples...
>     // We might need to have a commit() method (or something) here at the end - hopefully not though...
>     event.commit();
> }
> Let me know if this makes sense.
> Cheers,
> Chris
>
> On Tue, Jul 13, 2010 at 9:38 AM, Corban Brook <corbanbrook@gmail.com> wrote:
>>
>> Hello Chris,
>> Had another chance to go over your api today. I am going to be making a
>> javascript layer implementation of your spec which will work on top of the
>> mozilla audio data api.
>> This should allow us to review and quickly prototype new features or
>> changes on our working firefox implementation.
>> One question Richard brought up on IRC which I could not find an answer
>> for in your API is how do we add existing DOM audio elements to the graph?
>> An audio element is in essence a Source and Destination node, How would I
>> inject a lowpass filter into the pipeline? In the mozilla API we do this by
>> muting the audio element and then reading out frames, filtering and then
>> piping to a second non-DOM Audio element (Hacky, I know).
>> Here is a rough setup of how this might work, could you fill in the gaps
>> for me?
>>
>> var context = new AudioContext();
>>
>> var lowpass = context.createLowPass2Filter();
>>
>> var audio = document.getElementById('audioEle');
>>
>> function filterSound() {
>>
>>   var source = context.createAudioBuffer(audio); // perhaps passing in the audio element here generates 1 frame worth of buffer as it plays ?
>>
>>   source.connect(lowpass);
>>
>>   lowpass.connect(context.destination);
>>
>> }
>>
>> Re,
>> Corban
>



-- 
ricard
http://twitter.com/ricardmp
http://www.ricardmarxer.com
http://www.caligraft.com

Received on Tuesday, 13 July 2010 21:11:49 UTC