
Re: Web Audio API Proposal

From: Chris Rogers <crogers@google.com>
Date: Tue, 13 Jul 2010 12:25:40 -0700
Message-ID: <AANLkTikjkvR442wLNnuIXMQXfbEBLS6yGqGv8iaKFK2V@mail.gmail.com>
To: Corban Brook <corbanbrook@gmail.com>
Cc: Ricard Marxer Pin <ricardmp@gmail.com>, public-xg-audio@w3.org

Hi Corban,

In the provisional specification I've been working on with Apple, an audio
element has an "audioSource" attribute.  But an audio element is not also
considered to be a destination.  The only destination is the AudioContext's
destination.  Consider a case where you have multiple sources (say, three
audio elements) and you want to create a mixer and apply an effect like
reverb to all three.  Then each source can share the same reverb effect and
route the rendered audio to a single destination.
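
To make the mixing case concrete, here is a rough sketch in the same style
(the createReverb() name below is just a placeholder I'm using for
illustration, not something from the provisional spec):

var context = new AudioContext();
var reverb = context.createReverb(); // placeholder name for a shared reverb effect

// Three audio elements in the page, e.g. <audio id="audio1">, <audio id="audio2">, <audio id="audio3">
var ids = ['audio1', 'audio2', 'audio3'];
for (var i = 0; i < ids.length; i++) {
    var source = document.getElementById(ids[i]).audioSource;
    source.connect(reverb); // every source shares the same reverb
}
reverb.connect(context.destination); // and there is still only one destination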

Here's your example with a few changes:

var context = new AudioContext();
var lowpass = context.createLowPass2Filter();
var audio = document.getElementById('audioElement');

function setupAudioFilter() {
  var source = audio.audioSource; // notice the audio element has this new attribute
  source.connect(lowpass);
  lowpass.connect(context.destination);
}

As soon as you start playing the audio element, it will be heard through the
lowpass filter.
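
For instance (purely illustrative, assuming the page has an
<audio id="audioElement"> with some source set):

window.addEventListener('load', function() {
    setupAudioFilter();
    audio.play(); // now heard through source -> lowpass -> destination
}, false);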

But this isn't what you want for JavaScript processing, because this code
sets up a routing graph which does native processing (for example, the
filter runs as native code).

I think what you guys are interested in right now is how to do the actual
DSP in JavaScript.  So here's what I would suggest for that.  I just made up
the API for this with the idea that there would be a JavaScriptProcessorNode
which invokes a callback function (called process() in this example).  I'm
pretty sure this will work and can be a good starting point for the API, but
we'll need to refine and perfect it.

var context;
var jsProcessor;

function init() {
    context = new AudioContext();
}

function setupJavascriptProcessing() {
    jsProcessor = context.createJavaScriptProcessor();
    jsProcessor.onprocess = process;

    var audio = document.getElementById('audioElement');
    audio.audioSource.connect(jsProcessor);

    jsProcessor.connect(context.destination);
}

// This function gets called periodically to process a single buffer's worth of audio
function process(event) {
    // For this example, let's assume inputSamples and outputSamples are stereo
    // interleaved, although I'd like to move to an API where these are
    // non-interleaved.  This is a detail we can discuss later.
    var inputSamples = event.inputSamples; // a Float32Array
    var outputSamples = event.outputSamples; // a Float32Array
    var n = event.numberOfSampleFrames; // number of sample-frames (for example, 4096 left and right samples)

    // DSP magic here, where you would process n sample-frames from inputSamples -> outputSamples...

    // We might need to have a commit() method (or something) here at the end - hopefully not, though...
    event.commit();
}
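
As an illustration only, the simplest possible "DSP magic" would be a
straight gain applied to the interleaved stereo data - this loop is just a
placeholder for where the JavaScript processing goes, not part of the
proposal:

    // Example only: apply a fixed gain to every sample.
    // Interleaved stereo means 2 * n samples in total (L, R, L, R, ...).
    var gain = 0.5;
    for (var i = 0; i < 2 * n; i++) {
        outputSamples[i] = gain * inputSamples[i];
    }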

Let me know if this makes sense.

Cheers,
Chris


On Tue, Jul 13, 2010 at 9:38 AM, Corban Brook <corbanbrook@gmail.com> wrote:

> Hello Chris,
>
> Had another chance to go over your API today. I am going to be making a
> JavaScript layer implementation of your spec which will work on top of the
> Mozilla Audio Data API.
> This should allow us to review and quickly prototype new features or
> changes on our working Firefox implementation.
>
> One question Ricard brought up on IRC, which I could not find an answer for
> in your API, is how we add existing DOM audio elements to the graph. An
> audio element is in essence both a Source and a Destination node. How would
> I inject a lowpass filter into the pipeline? In the Mozilla API we do this
> by muting the audio element, reading out frames, filtering them, and then
> piping them to a second non-DOM Audio element (hacky, I know).
>
> Here is a rough setup of how this might work; could you fill in the gaps
> for me?
>
> var context = new AudioContext();
>
> var lowpass = context.createLowPass2Filter();
>
> var audio = document.getElementById('audioEle');
>
>
> function filterSound() {
>
>   var source = context.createAudioBuffer(audio); // perhaps passing in the audio element here generates 1 frame worth of buffer as it plays ?
>
>   source.connect(lowpass);
>
>   lowpass.connect(context.destination);
>
> }
>
> Re,
> Corban
>
Received on Tuesday, 13 July 2010 19:26:10 UTC
