- From: Chris Rogers <crogers@google.com>
- Date: Wed, 4 May 2011 12:33:16 -0700
- To: Alistair Macdonald <al@bocoup.com>
- Cc: public-audio@w3.org
- Message-ID: <BANLkTinzpMmCzo0DW5CseJK4jD7YJPSQNw@mail.gmail.com>
Hi Al, I just stumbled upon an example by Ryan Berdeen doing almost exactly the same thing as you here:

http://things.ryanberdeen.com/post/3971100191/web-audio-api-generating-sound

I really like this example for its simplicity:

**************************************
window.AudioContext = window.webkitAudioContext;
var context = new AudioContext();
var node = context.createJavaScriptNode(1024, 1, 1);
var p = 0;
node.onaudioprocess = function (e) {
  var data = e.outputBuffer.getChannelData(0);
  for (var i = 0; i < data.length; i++) {
    data[i] = Math.sin(p++);
  }
};
function play() {
  node.connect(context.destination);
}
function pause() {
  node.disconnect();
}
**************************************

So, even with the current implementation, it's possible to skip the awkward step of connecting a "dummy" source. I think this code is starting to look almost ideal and really very simple. It's not completely perfect yet:

* The method call: context.createJavaScriptNode(1024, 1, 1);

The second argument is currently supposed to be the number of inputs (which can be ignored, as in the above example). The third argument is currently supposed to be the number of outputs, and *not* the number of channels for a single output. This distinction between the number of outputs and the number of channels is subtle but important, and something I need to explain better. I also need to implement it more properly. One option here is simply to allow only a single input (which can be ignored) and a single output; then these method arguments *can* be taken to mean the number of channels. I think for nearly all cases this will be sufficient, but it doesn't completely exploit the possibilities of rendering distinct multiple outputs that AudioNodes are capable of...

* I would recommend that people generating sine waves, or doing any other DSP, make use of the AudioContext .sampleRate attribute, since the sample rate may be different depending on the machine or implementation.
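That recommendation can be sketched as a small helper (hypothetical, not part of the API) that derives the per-sample phase increment from the context's sampleRate, so a given frequency sounds the same on every machine:

```javascript
// Hypothetical helper (not part of the Web Audio API): fill a channel buffer
// with a sine tone at a given frequency. The phase step per sample is
// 2*PI*frequency/sampleRate, so the pitch does not depend on the machine's
// sample rate.
function fillSine(data, sampleRate, frequency, startPhase) {
  var step = 2 * Math.PI * frequency / sampleRate; // radians per sample
  var phase = startPhase;
  for (var i = 0; i < data.length; i++) {
    data[i] = Math.sin(phase);
    phase += step;
  }
  return phase; // carry the phase into the next buffer to avoid clicks
}

// Inside onaudioprocess one might then write (sketch):
// p = fillSine(e.outputBuffer.getChannelData(0), context.sampleRate, 440, p);
```

Returning the end phase and passing it back in on the next callback keeps the waveform continuous across buffer boundaries.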
And when generating an audio stream sample-by-sample, it's important to factor in this sample rate.

Chris

On Thu, Apr 21, 2011 at 3:31 PM, Alistair Macdonald <al@bocoup.com> wrote:
> Hi Chris Rogers,
>
> Digging a little deeper into the Web Audio spec here to build a few tests.
> Enjoying the API so far; it feels nice to work with. It also seems pretty
> glitch free (I have only tried OS X).
>
> I have two questions:
>
> 1) Can I download a Linux version from anywhere yet to test? (even if it is
> not release-ready)
>
> 2) Is there a better way to generate simple tones from JavaScript than the
> following method?
>
> var context = new webkitAudioContext(),
>     ptr = 0,
>     jsProc = context.createJavaScriptNode( 2048 );
>
> jsProc.onaudioprocess = function( e ){
>     var outl = e.outputBuffer.getChannelData(0),
>         outr = e.outputBuffer.getChannelData(1),
>         n = e.inputBuffer.getChannelData(0).length;
>     for (var i = 0; i < n; ++i) {
>         outl[i] = Math.sin((i+ptr)/40);
>         outr[i] = Math.sin((i+ptr)/40);
>     }
>     ptr += n;
> };
>
> var source = context.createBufferSource();
> source.connect( jsProc );
> jsProc.connect( context.destination );
>
> This seems to work, but I am unsure whether this is the ideal method for
> generating a simple tone with JavaScript. I'm asking because it feels a
> little odd to be using an event from a silent stream to generate the data.
>
> Perhaps I should be thinking of this event as the point in time where the
> audio engine calls for the mixing down of all buffers connected to the
> context.destination, rather than thinking of it as new data being available
> to the stream?
>
> -- Alistair
Received on Wednesday, 4 May 2011 19:33:42 UTC