Re: scheduling subgraphs

Hi all,
I'm afraid I've only been following the list for a couple of weeks,
so I have a feeling that this discussion may have been had already.
I may also be missing something in the spec, so apologies in advance.

That said, I think this post raises the wider issue of how scheduling
will work.  With the current spec, as I understand it, I can very
easily say "play this sound in 5 seconds"; however, I have to write my
own scheduler if I want to say "start generating a sine wave in 5
seconds".  These seem to me to be fundamentally very similar ideas,
and from an API point of view they should be treated the same.
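
To make the asymmetry concrete, here is roughly what the two cases
look like against the current draft.  (The startSine() helper and the
setTimeout() fallback are only my own sketch of what an author would
have to do today, not anything taken from the spec.)

// Playing a buffer 5 seconds from now is trivial with the current API:
var source = context.createBufferSource();
source.buffer = buffer;
source.connect(context.destination);
source.noteOn(context.currentTime + 5);

// But starting a generated sine wave 5 seconds from now means rolling
// my own timer, e.g. with setTimeout, whose accuracy is at the mercy
// of the event loop:
setTimeout(function () {
 startSine(context); // hypothetical helper that builds and connects the sine-generating nodes
}, 5000);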

I think it would be more flexible if, rather than concentrating on
how note-on scheduling works, a more generic scheduler object were
introduced, either natively or in JavaScript.
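
Concretely, the scheduler I have in mind would expose something along
these lines (this is only a sketch: createScheduler(),
scheduleAbsolute() and scheduleRelative() are names I have made up,
and nothing like them exists in the current draft):

// Hypothetical interface -- not in the spec.
var scheduler = context.createScheduler();

// Invoke callback.apply(null, args) when context.currentTime reaches "time".
scheduler.scheduleAbsolute(time, callback, args);

// Invoke callback.apply(null, args) "delta" seconds after "now", where "now"
// is either the time of this call or the time the enclosing scheduled
// callback was due, so that nested schedules don't accumulate timer error.
scheduler.scheduleRelative(delta, callback, args);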

So Joe's examples would look something like this:

CASE 1. One wishes to play a sequence of audio buffers at some
sequence of evenly spaced times starting right now.

function main() {
 var context = new AudioContext();
 context.scheduler = context.createScheduler();
 playSequence(context, [/* buffers */], 0.25);
}

function playSequence(context, bufferList, interval) {
 for (var i = 0; i < bufferList.length; i++) {
   // Schedule relative to the current time
   context.scheduler.scheduleRelative(interval * i,              // time
                                      playBuffer,                // callback
                                      [context, bufferList[i]]); // argument list
 }
}

function playBuffer(context, buffer) {
 var node = context.createBufferSource();
 node.buffer = buffer;
 node.connect(context.destination);
 node.play(); // instead of noteOn(0), as scheduling is independent of nodes
}

CASE 2: builds on CASE 1 by playing a supersequence of sequences, with
its own time delay between the onset of lower-level sequences.

function main() {
 var context = new AudioContext();
 context.scheduler = context.createScheduler();
 playSupersequence(scheduler, buffers, 10, 5.0);
}

function playSupersequence(context, bufferList, repeatCount, interval) {
 for (var i = 0; i < repeatCount; i++) {
   context.scheduler.scheduleRelative(i * interval, playSequence,
                                      [context, bufferList, 0.25]);
 }
}

function playSequence(context, bufferList, interval) {
 for (var i = 0; i < bufferList.length; i++) {
   context.scheduler.scheduleRelative(interval * i,              // time
                                      playBuffer,                // callback
                                      [context, bufferList[i]]); // argument list
 }
}

function playBuffer(context, buffer) {
 var node = context.createBufferSource();
 node.buffer = buffer;
 node.connect(context.destination);
 node.play();
}


From an end-user's perspective this seems to offer a lot more
flexibility than what is currently possible, and even more than Joe's
proposal.  In fact, it is fairly similar to the way SuperCollider
(http://supercollider.sourceforge.net/) structures its scheduling API,
which seems to work well for a wide range of applications.
Anticipating the problems with this: I imagine there is significant
overhead in calling into JavaScript, which could easily knock the
timing askew.  Again, I guess this would need trying out to see how
much of a problem it would be.
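
As a starting point for that experiment, something like the following
JavaScript-only stand-in could be hung off the context (again only a
sketch: createScheduler() here is my own helper, and its
setTimeout-based timing would need measuring against
context.currentTime before trusting it for anything rhythmically
tight):

function createScheduler(context) {
 return {
   scheduleAbsolute: function (time, callback, args) {
     var delayMs = Math.max(0, time - context.currentTime) * 1000;
     setTimeout(function () {
       // (context.currentTime - time) at this point is the skew
       // introduced by the event loop
       callback.apply(null, args);
     }, delayMs);
   },
   scheduleRelative: function (delta, callback, args) {
     this.scheduleAbsolute(context.currentTime + delta, callback, args);
   }
 };
}

With this in place, the examples above would read
context.scheduler = createScheduler(context) rather than
context.createScheduler().
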
Apologies again if this has been discussed before,
Joe

On Tue, Oct 19, 2010 at 2:56 AM, Joseph Berkovitz <joe@noteflight.com> wrote:
> On the subgraph-scheduling topic that we discussed on the call today, we
> resolved that we'd work through some code examples to understand the issue
> of subgraph scheduling better.  I would like to try to take a first step on
> this. If it doesn't feel valuable to go further with it, then I'll be fine
> with moving on!
>
> The issue for me is that I would like to be able to define a "local time
> origin" that is used to transform all time values used by noteOn(),
> startAt(), automateAt()... basically, most functions that care about time.
>  Right now these are all scheduled relative to an "absolute time origin"
> that is associated with the owning AudioContext, which I feel is a bit
> inconvenient and requires extra parameters in every function in the call
> tree that makes a node graph.
>
> This feels to me like it's a low-impact thing to implement -- but only if
> people feel it's worth it.  Let me make a concrete proposal that seems cheap
> and easy, and try to show how it affects a couple of simple use cases.  My
> proposal is adapted directly from the notion of transforms in the HTML5
> Canvas specification, and consists of three functions on AudioContext:
> offset(), save(), restore().  AudioContext also acquires a new attribute:
> "currentOffset". Here are their definitions:
>
>  Object transform: an Object with a numeric "offset" property which affects
> any time-based property of an object created from this AudioContext. Other
> properties could be added, e.g "gain". The idea is that these are properties
> that make sense to affect a wide array of objects.
>  void offset(float delta): adds delta to the value of transform.offset
>  save(): pushes a copy of "transform" onto an internal stack in the
> AudioContext
>  restore(): pops an object from the same internal stack into "transform"
>
> Implementation concept: The parent AudioContext's currentOffset value is
> automatically added to any time-valued parameters that are used in
> scheduling functions on a node, such as noteOn(), etc.
>
> USE CASES
>
> The main difference is simple: with local time offsets saved in the context,
> one can eliminate a whole bunch of "startTime" parameters that need to be
> passed through everywhere.  This may not seem like much of a saving, but it
> feels cleaner to me, and if the spec ever starts to generalize the notion of
> a saved/restored transform to include other variables besides time (e.g. a
> "local gain" or "local pan"), it starts really paying off.  You don't want
> to go back and ask developers to add a whole bunch of new parameters to
> existing functions and pass all these values through everywhere.
>
> I'm going to give two use cases. The second one builds on the first.
>
> CASE 1. One wishes to play a sequence of audio buffers at some sequence of
> evenly spaced times starting right now.
>
> Code needed today:
>
> function main() {
>  var context = new AudioContext();
>  playSequence(context, [/* buffers */], 0.25, context.currentTime);
> }
>
> function playSequence(context, bufferList, interval, startTime) {
>  for (var i = 0; i < bufferList.length; i++) {
>    playBuffer(context, bufferList[i], startTime);
>    startTime += interval;
>  }
> }
>
> function playBuffer(context, buffer, startTime) {
>  var node = context.createBufferSource();
>  node.buffer = buffer;
>  node.noteOn(startTime);
>  node.connect(context.destination);
> }
>
> Code needed with time-offset transforms:
>
> function main() {
>  var context = new AudioContext();
>  // from here on out, all time offsets are relative to "now"
>  context.offset(context.currentTime);
>  playSequence(context, [/* buffers */], 0.25);
> }
>
>
> function playSequence(context, bufferList, interval) {
>  for (var i = 0; i < bufferList.length; i++) {
>    playBuffer(context, bufferList[i]);
>    context.offset(interval);
>  }
> }
>
> function playBuffer(context, buffer) {
>  var node = context.createBufferSource();
>  node.buffer = buffer;
>  node.noteOn(0);  // starts relative to local time offset determined by caller
>  node.connect(context.destination);
> }
>
> CASE 2: builds on CASE 1 by playing a supersequence of sequences, with its
> own time delay between the onset of lower-level sequences.
>
> Code needed today:
>
> function main() {
>  var context = new AudioContext();
>  playSupersequence(context, [/* buffers */], 10, 5.0, context.currentTime);
> }
>
> function playSupersequence(context, bufferList, repeatCount, interval, startTime) {
>  for (var i = 0; i < repeatCount; i++) {
>    playSequence(context, bufferList, 0.25, startTime + (i * interval));
>  }
> }
>
> function playSequence(context, bufferList, interval, startTime) {
>  for (var i = 0; i < bufferList.length; i++) {
>    playBuffer(context, bufferList[i], startTime);
>    startTime += interval;
>  }
> }
>
> function playBuffer(context, buffer, startTime) {
>  var node = context.createBufferSource();
>  node.buffer = buffer;
>  node.noteOn(startTime);
>  node.connect(context.destination);
> }
>
> Code needed with time-offset transforms:
>
> function main() {
>  var context = new AudioContext();
>  context.offset(context.currentTime);
>  playSupersequence(context, [/* buffers */], 10, 5.0);
> }
>
> function playSupersequence(context, bufferList, repeatCount, interval) {
>  for (var i = 0; i < repeatCount; i++) {
>    playSequence(context, bufferList, 0.25);
>    context.offset(interval);
>  }
> }
>
> // Note use of save() and restore() to allow this function to
> // preserve the caller's time shift
> function playSequence(context, bufferList, interval) {
>  context.save();
>  for (var i = 0; i < bufferList.length; i++) {
>    playBuffer(context, bufferList[i]);
>    context.offset(interval);
>  }
>  context.restore();
> }
>
> function playBuffer(context, buffer) {
>  var node = context.createBufferSource();
>  node.buffer = buffer;
>  node.noteOn(0);  // starts relative to local time offset determined by caller
>  node.connect(context.destination);
> }
>
>
> ... .  .    .       Joe
>
> Joe Berkovitz
> President
> Noteflight LLC
> 160 Sidney St, Cambridge, MA 02139
> phone: +1 978 314 6271
> www.noteflight.com
>

Received on Tuesday, 19 October 2010 08:58:49 UTC