- From: Chris Rogers <crogers@google.com>
- Date: Thu, 21 Apr 2011 16:10:42 -0700
- To: Alistair Macdonald <al@bocoup.com>
- Cc: public-audio@w3.org
- Message-ID: <BANLkTimggSF21Dngvjh22DmauJ8YPNbR9g@mail.gmail.com>
On Thu, Apr 21, 2011 at 3:31 PM, Alistair Macdonald <al@bocoup.com> wrote:

> Hi Chris Rogers,
>
> Digging a little deeper into the Web Audio spec here to build a few tests.
> Enjoying the API so far; it feels nice to work with. It also seems pretty
> glitch-free (I have only tried OS X).

Hi Al, glad to help with your questions.

> I have two questions:
>
> 1) Can I download a Linux version from anywhere yet to test? (even if it is
> not release-ready)

We're working very hard to get something out there for Linux and Windows. We have some unique challenges in the Chrome audio back-end because of the "sandboxing" model. Basically, each tab runs in a separate process and the browser itself has its own process. So there's a pretty intense audio IPC mechanism which has its own platform-specific idiosyncrasies.

> 2) Is there a better way to generate simple tones from JavaScript than the
> following method?
>
>     var context = new webkitAudioContext(),
>         ptr = 0,
>         jsProc = context.createJavaScriptNode( 2048 );
>
>     jsProc.onaudioprocess = function( e ) {
>         // Note: the handler parameter is `e`, so the output buffer must be
>         // read from `e`, not from a global `event` object.
>         var outl = e.outputBuffer.getChannelData(0),
>             outr = e.outputBuffer.getChannelData(1),
>             n = e.outputBuffer.length;
>         for (var i = 0; i < n; ++i) {
>             outl[i] = Math.sin((i + ptr) / 40);
>             outr[i] = Math.sin((i + ptr) / 40);
>         }
>         ptr += n;
>     };
>
>     var source = context.createBufferSource();
>     source.connect( jsProc );
>     jsProc.connect( context.destination );
>
> This seems to work, but I am unsure whether this is the ideal method for
> generating a simple tone with JavaScript. I'm asking because it feels a
> little odd to be using an event from a silent stream to generate the data.
>
> Perhaps I should be thinking of this event as the point in time where the
> audio engine calls for the mixing down of all buffers connected to
> context.destination, rather than thinking of it as new data being
> available to the stream?
> --
> Alistair

The general form of the API is somewhat similar to what's exposed in Flash, so it should be familiar to those who have already been working with Flash audio. But the API should be simplified for the output-only case, such as what you're doing here.

Because in your case you're not processing audio but synthesizing (generating) it, it's *currently* necessary to connect a "dummy" source to the JavaScriptAudioNode. Then in your callback you basically ignore the audio input, since you're synthesizing instead of processing.

Ideally, we would be able to specify in the createJavaScriptNode() call that we're not interested in any input and only want to have an output. Then we wouldn't need to connect a "dummy" source. This is something which should be improved in the API, but I simply haven't yet had time to do it. In the meantime, the work-around of connecting a dummy source (although clunky) will achieve the same effect.

Cheers,
Chris
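As an aside on the fill logic in the quoted snippet: the `/40` divisor means one radian every 40 samples, which works out to sampleRate / (2π · 40) ≈ 175 Hz at 44.1 kHz. The per-callback work can be factored into a pure function of the running sample counter, which makes the phase continuity across callbacks explicit and lets the fill logic be tested outside the browser. `fillSine` below is a hypothetical helper for illustration, not part of the Web Audio API:

```javascript
// Fill `out` (a Float32Array channel buffer) with a sine tone.
// `ptr` is the running sample counter carried across callbacks;
// returning the updated counter keeps successive buffers phase-continuous.
// Hypothetical helper, not part of the Web Audio API.
function fillSine(out, ptr, frequency, sampleRate) {
  var omega = 2 * Math.PI * frequency / sampleRate; // radians per sample
  for (var i = 0; i < out.length; ++i) {
    out[i] = Math.sin((ptr + i) * omega);
  }
  return ptr + out.length;
}

// Sketch of how it would be used inside onaudioprocess (assuming a
// 440 Hz tone; `ptr` and `context` as in the quoted snippet):
//   jsProc.onaudioprocess = function (e) {
//     var out = e.outputBuffer.getChannelData(0);
//     ptr = fillSine(out, ptr, 440, context.sampleRate);
//   };
```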
Received on Thursday, 21 April 2011 23:11:15 UTC