Re: JavaScriptNode Tone Generation

Hi all,

Since we are starting to implicitly define some "ideal practices" here, I want to highlight a couple of additional points about this example:

> * I would recommend that people generating sin waves, or any other DSP, make use of the AudioContext .sampleRate attribute, since the sample-rate may be different depending on the machine or implementation.  And when generating an audio stream sample-by-sample, it's important to factor in this sample-rate.


I would like to go beyond that and modify this example to show the correct way to generate phase-accurate waveforms at a given frequency, in a manner that allows reuse of the rendering function across different nodes (to play simultaneous or staggered tones). This is not much more complex than the given example, and it does not steer people in a direction that will break when they start reaching deeper into the API.

The global "p" counter isn't really the best way to go as it places the burden of tracking a global time position on the developer.  We really want rendering functions to use a time position pointer passed in by the framework, shown below as e.playbackPosition (I can't find the current spec in the new structure so forgive me if I misremembered how this is supposed to work). 

So I think we want something slightly different, namely:

window.AudioContext = window.webkitAudioContext;

var context = new AudioContext();
var node = context.createJavaScriptNode(1024, 1, 1);
var freq = 440;  // (Hertz)

node.onaudiorender = function (e) {
    var data = e.outputBuffer.getChannelData(0);
    for (var i = 0; i < data.length; i++) {
        // 2*PI*freq*t yields a sine at freq Hertz, given a time position in seconds
        data[i] = Math.sin(2 * Math.PI * freq * (e.playbackPosition + (i / context.sampleRate)));
    }
};

Furthermore, each rendering call may be taking place at a different point in the generation of different simultaneous/staggered sine waves, so ideally one will want to create multiple nodes that all share the same audio rendering function. Here is another version of this idea, in which the "frequency" parameter becomes a property of a "parameters" object belonging to the node, accessible through the event object. The parameters could be passed into the node constructor.

In this approach the "freq" variable shown above can be eliminated, allowing each JSNode object to carry 100% of the information required for rendering. Consequently the rendering function can now be shared between multiple sine-wave-generating nodes:

var context = new AudioContext();
function renderSine(e)
{
    var data = e.outputBuffer.getChannelData(0);
    for (var i = 0; i < data.length; i++) {
        data[i] = Math.sin(2 * Math.PI * e.node.parameters.frequency * (e.playbackPosition + (i / context.sampleRate)));
    }
}

var node1 = context.createJavaScriptNode(1024, 1, 1, {frequency: 440});
node1.onaudiorender = renderSine;

var node2 = context.createJavaScriptNode(1024, 1, 1, {frequency: 880});
node2.onaudiorender = renderSine;
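
To actually hear both tones, the nodes can be hooked up just as in Chris's example below; here is a minimal sketch reusing the existing connect()/disconnect() calls and context.destination (the parameters argument and the onaudiorender handler above are of course only my proposal, not something the current implementation supports):

function playBoth() {
    // Both nodes share renderSine; each carries its own frequency parameter.
    node1.connect(context.destination);
    node2.connect(context.destination);
}

function stopBoth() {
    node1.disconnect();
    node2.disconnect();
}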

... .  .    .       Joe

Joe Berkovitz
President
Noteflight LLC
84 Hamilton St, Cambridge, MA 02139
phone: +1 978 314 6271
www.noteflight.com


On May 4, 2011, at 3:33 PM, Chris Rogers wrote:

> Hi Al,
> 
> I just stumbled upon an example by Ryan Berdeen doing almost exactly the same thing as you here:
> http://things.ryanberdeen.com/post/3971100191/web-audio-api-generating-sound
> 
> I really like this example for its simplicity:
> 
> **************************************
> 
> window.AudioContext = window.webkitAudioContext;
> 
> var context = new AudioContext();
> var node = context.createJavaScriptNode(1024, 1, 1);
> var p = 0;
> 
> node.onaudioprocess = function (e) {
>     var data = e.outputBuffer.getChannelData(0);
>     for (var i = 0; i < data.length; i++) {
>         data[i] = Math.sin(p++);
>     }
> };
> 
> function play() {
>     node.connect(context.destination);
> }
> 
> function pause() {
>     node.disconnect();
> }
> 
> **************************************
> 
> So, even with the current implementation, it's possible to skip the awkward step of connecting a "dummy" source.  And, I think this code is starting to look almost ideal and really very simple.
> 
> It's not completely perfect yet:
> 
> * The method call:  context.createJavaScriptNode(1024, 1, 1);
>   The second argument is currently supposed to be the number of inputs (which can be ignored as in the above example).
>   The third argument is currently supposed to be the number of outputs, and *not* the number of channels for a single output.  This distinction between number of outputs and number of channels is subtle, but important and something I need to explain better.  I also need to implement it more properly.  One option here is simply to only allow a single input (which can be ignored) and single output.  Then these method arguments *can* be taken to mean the number of channels.  I think for nearly all cases this will be sufficient, but doesn't completely exploit the possibilities of rendering distinct multiple outputs that AudioNodes are capable of...
> 
> * I would recommend that people generating sin waves, or any other DSP, make use of the AudioContext .sampleRate attribute, since the sample-rate may be different depending on the machine or implementation.  And when generating an audio stream sample-by-sample, it's important to factor in this sample-rate.
> 
> Chris
> 
> 
> 
> 
> 
> 
> On Thu, Apr 21, 2011 at 3:31 PM, Alistair Macdonald <al@bocoup.com> wrote:
> Hi Chris Rogers,
> 
> Digging a little deeper into the Web Audio spec here to build a few tests. Enjoying the API so far, it feels nice to work with. It also seems pretty glitch free (only tried OSX).
> 
> I have two questions:
> 
> 1) Can I download a Linux version from anywhere yet to test? (even if it is not release-ready)
> 
> 2) Is there a better way to generate simple tones from JavaScript than the following method?
> 
>   var context = new webkitAudioContext(),
>     ptr = 0,
>     jsProc = context.createJavaScriptNode( 2048 );
> 
>   jsProc.onaudioprocess = function( e ){
>     var outl = e.outputBuffer.getChannelData(0),
>         outr = e.outputBuffer.getChannelData(1),
>         n = e.inputBuffer.getChannelData(0).length;
>
>     for (var i = 0; i < n; ++i) {
>       outl[i] = Math.sin((i+ptr)/40);
>       outr[i] = Math.sin((i+ptr)/40);
>     }
>
>     ptr += i;
>   };
> 
>   var source = context.createBufferSource();
>   source.connect( jsProc );
>   jsProc.connect( context.destination );
> 
> 
> This seems to work, but I am unsure whether this is the ideal method for generating a simple tone with JavaScript. I'm asking because it feels a little odd to be using an event from a silent stream to generate the data.
> 
> Perhaps I should be thinking of this event as the point in time where the audio engine calls for mixing down all buffers connected to context.destination, rather than thinking of it as new data being available to the stream?
> 
> 
> -- Alistair
> 

Received on Wednesday, 4 May 2011 20:20:12 UTC