- From: Ryan Berdeen <ryan@ryanberdeen.com>
- Date: Fri, 13 May 2011 00:48:19 -0400
- To: Chris Rogers <crogers@google.com>
- Cc: Alistair Macdonald <al@bocoup.com>, public-audio@w3.org
Hi Chris,
Glad you found the example useful. I tried to make it as terse and
simple as possible, so I glossed over a few points.
For the call
context.createJavaScriptNode(1024, 1, 1);
I think the second argument should be 0, as there are no inputs. For
the third argument, I'm not sure I understand you exactly. The intent
here was to have a single output buffer and use only the first channel
of that buffer. Have I got this right? The example differs from the
spec because in the current implementation the outputBuffer attribute
of the event appears to be a single buffer, rather than an array of
numberOfOutputs buffers. Regardless, I think the example would be
clearer if it filled both channels.
Finally, yes, it should definitely use the sampleRate.
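For what it's worth, here's a rough sketch of how the example might look
with both points applied: both channels filled, and the phase step derived
from the context's sampleRate. The fillSine helper and the 440 Hz frequency
are made up for illustration, and the (1024, 0, 2) signature assumes the
arguments end up meaning buffer size, inputs, and channels:

```javascript
// Sketch only: fill both output channels with a 440 Hz sine whose phase
// step is derived from the sample rate. fillSine is a made-up helper,
// factored out so the DSP can be exercised outside the audio callback.
function fillSine(left, right, sampleRate, frequency, startSample) {
  var step = 2 * Math.PI * frequency / sampleRate; // radians per sample
  for (var i = 0; i < left.length; i++) {
    var s = Math.sin(step * (startSample + i));
    left[i] = s;
    right[i] = s;
  }
  return startSample + left.length; // next starting sample index
}

// Browser-only wiring, guarded so the sketch also parses elsewhere.
if (typeof window !== 'undefined' &&
    (window.AudioContext || window.webkitAudioContext)) {
  var context = new (window.AudioContext || window.webkitAudioContext)();
  var node = context.createJavaScriptNode(1024, 0, 2);
  var sample = 0;
  node.onaudioprocess = function (e) {
    sample = fillSine(e.outputBuffer.getChannelData(0),
                      e.outputBuffer.getChannelData(1),
                      context.sampleRate, 440, sample);
  };
  node.connect(context.destination);
}
```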
- Ryan
On Wed, May 4, 2011 at 3:33 PM, Chris Rogers <crogers@google.com> wrote:
> Hi Al,
> I just stumbled upon an example by Ryan Berdeen doing almost exactly the
> same thing as you here:
> http://things.ryanberdeen.com/post/3971100191/web-audio-api-generating-sound
> I really like this example for its simplicity:
> **************************************
> window.AudioContext = window.webkitAudioContext;
> var context = new AudioContext();
> var node = context.createJavaScriptNode(1024, 1, 1);
> var p = 0;
> node.onaudioprocess = function (e) {
>   var data = e.outputBuffer.getChannelData(0);
>   for (var i = 0; i < data.length; i++) {
>     data[i] = Math.sin(p++);
>   }
> };
> function play() {
>   node.connect(context.destination);
> }
> function pause() {
>   node.disconnect();
> }
> **************************************
> So, even with the current implementation, it's possible to skip the awkward
> step of connecting a "dummy" source. And I think this code is starting to
> look almost ideal and really very simple.
> It's not completely perfect yet:
> * The method call: context.createJavaScriptNode(1024, 1, 1);
> The second argument is currently supposed to be the number of inputs
> (which can be ignored, as in the above example).
> The third argument is currently supposed to be the number of outputs, and
> *not* the number of channels for a single output. This distinction between
> the number of outputs and the number of channels is subtle but important,
> and something I need to explain better. I also need to implement it more
> properly. One option here is simply to allow only a single input (which can
> be ignored) and a single output. Then these method arguments *can* be taken
> to mean the number of channels. I think for nearly all cases this will be
> sufficient, but it doesn't completely exploit the possibility of rendering
> multiple distinct outputs that AudioNodes are capable of...
> * I would recommend that people generating sine waves, or doing any other
> DSP, make use of the AudioContext.sampleRate attribute, since the sample
> rate may differ depending on the machine or implementation. When generating
> an audio stream sample by sample, it's important to factor in this
> sample rate.
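A quick numeric sketch of this point: Math.sin(p++) advances one radian per
sample, so the pitch it produces depends entirely on the hardware sample
rate, whereas a rate-derived increment keeps the pitch fixed (the 440 Hz
target below is invented for illustration):

```javascript
// One radian per sample: the resulting frequency in Hz is sampleRate / 2π,
// so the same code plays a different pitch on different hardware.
function pitchOfUnitStep(sampleRate) {
  return sampleRate / (2 * Math.PI);
}

// Phase increment for a fixed frequency at a given sample rate: the tone
// stays at the intended pitch regardless of the hardware rate.
function phaseIncrement(frequency, sampleRate) {
  return 2 * Math.PI * frequency / sampleRate;
}

console.log(pitchOfUnitStep(44100)); // roughly 7019 Hz
console.log(pitchOfUnitStep(48000)); // roughly 7639 Hz
```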
> Chris
>
> On Thu, Apr 21, 2011 at 3:31 PM, Alistair Macdonald <al@bocoup.com> wrote:
>>
>> Hi Chris Rogers,
>> Digging a little deeper into the Web Audio spec here to build a few tests.
>> Enjoying the API so far; it feels nice to work with. It also seems pretty
>> glitch-free (I've only tried OS X).
>> I have two questions:
>> 1) Can I download a Linux version from anywhere yet to test? (even if it
>> is not release-ready)
>> 2) Is there a better way to generate simple tones from JavaScript than the
>> following method?
>> var context = new webkitAudioContext(),
>>     ptr = 0,
>>     jsProc = context.createJavaScriptNode( 2048 );
>> jsProc.onaudioprocess = function( e ){
>>   var outl = e.outputBuffer.getChannelData(0),
>>       outr = e.outputBuffer.getChannelData(1),
>>       n = e.inputBuffer.getChannelData(0).length;
>>   for (var i = 0; i < n; ++i) {
>>     outl[i] = Math.sin((i+ptr)/40);
>>     outr[i] = Math.sin((i+ptr)/40);
>>   }
>>
>>   ptr += i;
>> };
>> var source = context.createBufferSource();
>> source.connect( jsProc );
>> jsProc.connect( context.destination );
>>
>> This seems to work, but I am unsure whether it is the ideal method for
>> generating a simple tone with JavaScript. I'm asking because it feels a
>> little odd to be using an event from a silent stream to generate the data.
>> Perhaps I should be thinking of this event as the point in time where the
>> audio engine calls for mixing down all the buffers connected to
>> context.destination, rather than thinking of it as new data becoming
>> available to the stream?
>>
>> -- Alistair
>
Received on Friday, 13 May 2011 04:48:50 UTC