scheduling subgraphs

On the subgraph-scheduling topic that we discussed on the call today,  
we resolved that we'd work through some code examples to understand  
the issue of subgraph scheduling better.  I would like to try to take  
a first step on this. If it doesn't feel valuable to go further with  
it, then I'll be fine with moving on!

The issue for me is that I would like to be able to define a "local  
time origin" that is used to transform all time values used by  
noteOn(), startAt(), automateAt()... basically, most functions that  
care about time.  Right now these are all scheduled relative to an  
"absolute time origin" that is associated with the owning  
AudioContext, which I feel is a bit inconvenient and requires extra  
parameters in every function in the call tree that makes a node graph.

This feels to me like it's a low-impact thing to implement -- but only  
if people feel it's worth it.  Let me make a concrete proposal that  
seems cheap and easy, and try to show how it affects a couple of simple  
use cases.  My proposal is adapted directly from the notion of  
transforms in the HTML5 Canvas specification, and consists of three  
functions on AudioContext: offset(), save(), restore().  AudioContext  
also acquires a new attribute: "currentOffset". Here are their  
definitions:

   Object transform: an Object with a numeric "offset" property that
      affects any time-based property of an object created from this
      AudioContext. Other properties could be added later, e.g. "gain".
      The idea is that these are properties that make sense to apply
      across a wide array of objects.
   void offset(float delta): adds delta to the value of transform.offset
   void save(): pushes a copy of "transform" onto an internal stack in
      the AudioContext
   void restore(): pops an object from that same internal stack back into
      "transform"

Implementation concept: the parent AudioContext's currentOffset value  
(i.e. its transform.offset) is automatically added to any time-valued  
parameter passed to a scheduling function on a node, such as noteOn(), etc.
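
To make the concept a bit more tangible (and only as a sketch, not as a  
proposed API surface), here is roughly how the transform stack could behave,  
written as a wrapper over today's AudioContext. The wrapper name  
"OffsetContext" and the playBufferAt() helper are made up for illustration;  
in the actual proposal offset(), save() and restore() would live on  
AudioContext itself.

// Illustrative sketch only: a wrapper that mimics the proposed transform stack.
function OffsetContext(context) {
   this.context = context;
   this.transform = { offset: 0 };   // the "transform" object described above
   this.stack = [];                  // internal save/restore stack
}

OffsetContext.prototype.offset = function(delta) {
   this.transform.offset += delta;   // offset(delta) adds to transform.offset
};

OffsetContext.prototype.save = function() {
   this.stack.push({ offset: this.transform.offset });  // push a copy of "transform"
};

OffsetContext.prototype.restore = function() {
   this.transform = this.stack.pop();                    // pop it back into "transform"
};

// Any scheduling call adds the current offset to its time argument:
OffsetContext.prototype.playBufferAt = function(buffer, when) {
   var node = this.context.createBufferSource();
   node.buffer = buffer;
   node.connect(this.context.destination);
   node.noteOn(this.transform.offset + when);   // local time -> absolute time
};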

USE CASES

The main difference is simple: with local time offsets saved in the  
context, one can eliminate a whole bunch of "startTime" parameters  
that need to be passed through everywhere.  This may not seem like  
much of a saving, but it feels cleaner to me, and if the spec ever  
starts to generalize the notion of a saved/restored transform to  
include other variables besides time (e.g. a "local gain" or "local  
pan"), it starts really paying off.  You don't want to go back and ask  
developers to add a whole bunch of new parameters to existing  
functions and pass all these values through everywhere.
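
To make that payoff concrete, here is a purely illustrative sketch of what a  
leaf function might look like if the transform also carried a "gain" property.  
The transform.gain property is hypothetical, and I'm just reusing today's  
createGainNode() for the wiring; neither is part of the proposal above.

function playBuffer(context, buffer) {
   var node = context.createBufferSource();
   node.buffer = buffer;
   var gainNode = context.createGainNode();
   gainNode.gain.value = context.transform.gain;  // pick up the "local gain" from the transform
   node.connect(gainNode);
   gainNode.connect(context.destination);
   node.noteOn(0);                                // pick up the local time offset, as before
}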

I'm going to give two use cases. The second one builds on the first.

CASE 1. One wishes to play a sequence of audio buffers at evenly spaced  
times, starting right now.

Code needed today:

function main() {
   var context = new AudioContext();
   playSequence(context, [/* buffers */], 0.25, context.currentTime);
}

function playSequence(context, bufferList, interval, startTime) {
   for (var i = 0; i < bufferList.length; i++) {
     playBuffer(context, bufferList[i], startTime);
     startTime += interval;
   }
}

function playBuffer(context, buffer, startTime) {
   var node = context.createBufferSource();
   node.buffer = buffer;
   node.noteOn(startTime);
   node.connect(context.destination);
}

Code needed with time-offset transforms:

function main() {
   var context = new AudioContext();
   // From here on out, all time offsets are relative to "now".
   context.offset(context.currentTime);
   playSequence(context, [/* buffers */], 0.25);
}


function playSequence(context, bufferList, interval) {
   for (var i = 0; i < bufferList.length; i++) {
     playBuffer(context, bufferList[i]);
     context.offset(interval);
   }
}

function playBuffer(context, buffer) {
   var node = context.createBufferSource();
   node.buffer = buffer;
   // Starts relative to the local time offset determined by the caller.
   node.noteOn(0);
   node.connect(context.destination);
}

CASE 2. Builds on CASE 1 by playing a supersequence of sequences, with its  
own time interval between the onsets of the lower-level sequences.

Code needed today:

function main() {
   var context = new AudioContext();
   playSupersequence(context, [/* buffers */], 10, 5.0, context.currentTime);
}

function playSupersequence(context, bufferList, repeatCount, interval, startTime) {
   for (var i = 0; i < repeatCount; i++) {
     playSequence(context, bufferList, 0.25, startTime + (i * interval));
   }
}

function playSequence(context, bufferList, interval, startTime) {
   for (var i = 0; i < bufferList.length; i++) {
     playBuffer(context, bufferList[i], startTime);
     startTime += interval;
   }
}

function playBuffer(context, buffer, startTime) {
   var node = context.createBufferSource();
   node.buffer = buffer;
   node.noteOn(startTime);
   node.connect(context.destination);
}

Code needed with time-offset transforms:

function main() {
   var context = new AudioContext();
   context.offset(context.currentTime);
   playSupersequence(context, [/* buffers */], 10, 5.0);
}

function playSupersequence(context, bufferList, repeatCount, interval) {
   for (var i = 0; i < repeatCount; i++) {
     playSequence(context, bufferList, 0.25);
     context.offset(interval);
   }
}

// Note the use of save() and restore(), which lets this function preserve the caller's time shift.
function playSequence(context, bufferList, interval) {
   context.save();
   for (var i = 0; i < bufferList.length; i++) {
     playBuffer(context, bufferList[i]);
     context.offset(interval);
   }
   context.restore();
}

function playBuffer(context, buffer) {
   var node = context.createBufferSource();
   node.buffer = buffer;
   // Starts relative to the local time offset determined by the caller.
   node.noteOn(0);
   node.connect(context.destination);
}


... .  .    .       Joe

Joe Berkovitz
President
Noteflight LLC
160 Sidney St, Cambridge, MA 02139
phone: +1 978 314 6271
www.noteflight.com
