Re: Sample-accurate JS output (was: scheduling subgraphs)

On Tue, Oct 19, 2010 at 12:34 PM, Joseph Berkovitz <joe@noteflight.com> wrote:
> After reading Joe T's note and looking at the spec, I agree that this is an
> important issue and that there are some missing features for programmatic
> audio generation in a JS node.  I haven't used JS to generate audio yet, so
> I hadn't run into it, and I also missed it during my API review.  I think
> the issue is somewhat orthogonal to the subgraph-scheduling problem,
> although subgraph-scheduling certainly helps, so I retitled the thread --
> hope that's OK.

Yes, sorry - I think I meandered quite a long way from your original post!

> Joe T's proposal is very similar to the Performance object in StandingWave,
> which schedules the output of an arbitrary set of upstream nodes in a way
> that is completely transparent to the programmer.  I think that even
> though the Performance/native-scheduler concept hasn't really caught on with
> the group, the inability to schedule programmatic generation is a
> significant gap in the spec and there should be some way of addressing it.
>  People will definitely want to "generate a sine wave 5 seconds from now".
>  Rendering a programmatic source into a buffer and scheduling it is not an
> answer, since the source might be continuous.

I think part of my point is that the native-scheduler concept is almost
already in the spec.  You can specify a time for noteOn playback, which
is essentially a native scheduler.  If this can be made more generic and
accessible from JavaScript (assuming this is technically feasible), then
noteOn and sub-graph scheduling should be almost trivial to implement on
top of it.  I think maybe I'm missing some of the subtleties of how
noteOn-style scheduling might apply to other AudioSourceNodes, though.
Is there anything I should have read on this?
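
To make this concrete, here is a rough sketch of how a generic scheduler
could be layered on top of the one native scheduling hook we already have,
noteOn(time).  None of this is in the draft -- "Scheduler" and "at" are
names invented for illustration.  A coarse JS timer wakes up periodically,
and anything due within a look-ahead window is handed its exact context
time, so the actual noteOn() call stays sample-accurate:

function Scheduler(context, lookAhead) {  // lookAhead in seconds
  var queue = [];  // pending { time, callback } events
  this.at = function(time, callback) {
    queue.push({ time: time, callback: callback });
  };
  setInterval(function() {
    var horizon = context.currentTime + lookAhead;
    // Fire everything due inside the window; keep the rest queued
    queue = queue.filter(function(ev) {
      if (ev.time < horizon) { ev.callback(ev.time); return false; }
      return true;
    });
  }, lookAhead * 500);  // wake twice per look-ahead window (ms)
}

The sloppy timer only decides when to issue the scheduling call; the
sample-accurate part is still noteOn():

var sched = new Scheduler(context, 0.1);
sched.at(t, function(when) { source.noteOn(when); });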

Joe

> Chris, can you explain how a JavaScriptAudioNode can be scheduled and also
> do sample-accurate generation?  It seems to me that such a node has no way
> of knowing what exact sample frames it is generating.  It can't look at
> context.currentTime because that reflects what is playing at the moment, not
> what the JSAudioNode is generating.  The event contains no information
> relating to the time at which generated output will actually be rendered.
> Even if we do not adopt a Performance/scheduler-like proposal, I see a few
> different ways to rectify the problem with JSAudioNode:
> - introduce a "playbackTime" attribute into AudioProcessingEvent that
> indicates the AudioContext time at which the first sample of the generated
> output will be played back.  This is, I think, the minimum necessary to
> allow the node to know what it's supposed to do relative to the time dimension.
> - [this adds to the above point]  introduce a flavor of JSAudioNode that is
> only a generator, and which has a noteOn()/startAt() function.  Time-shift
> the value of playbackTime by the start time of the node.  This makes it
> extremely easy to code up such nodes, since they always "think" that they
> are simply generating batches of samples starting from time 0.  The
> implementation overhead for this time shift is the management of batches of
> JS output that are out of phase with the main sample-generation cycle. That
> doesn't seem so bad.
> - [further adding to the above point] offset the noteOn() time by the
> context's current transform time offset that was in effect when noteOn() was
> called (relative to my previous proposal).
> - [further addition] provide a noteOff() that causes the node to stop being
> asked for output after some designated time, or allow a duration to be
> passed.
> So, to start generating a sine wave of frequency freq at time t, you'd
> simply do this:
>
> function sineWaveAt(t, freq) {
>   var sineWave = context.createJavaScriptOutputNode(...);
>   sineWave.onaudioprocess = function(e) {
>     var out = e.outputBuffer.getChannelData(0);
>     for (var i = 0; i < out.length; i++) {
>       // playbackTime = context time of the first sample in this buffer
>       out[i] = Math.sin(2 * Math.PI * freq * (e.playbackTime + i / context.sampleRate));
>     }
>   }
>   sineWave.noteOn(t);
> }
> Consider the simplicity of the above, versus the programming headaches that
> result when this node is called prior to the point at which generation
> should begin: ugly conditionals that many people will get wrong.
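>
> For contrast, here is a sketch of the conditional bookkeeping I mean --
> what you are forced to write today, without playbackTime or noteOn()
> ("samplesUntilStart" and "phase" are names made up for illustration):
>
> // Even this setup is only approximate, since currentTime reflects
> // what is playing now, not what this node is currently generating.
> var samplesUntilStart =
>     Math.round((t - context.currentTime) * context.sampleRate);
> var phase = 0;
> sineWave.onaudioprocess = function(e) {
>   var out = e.outputBuffer.getChannelData(0);
>   for (var i = 0; i < out.length; i++) {
>     if (samplesUntilStart > 0) {
>       samplesUntilStart--;   // before the start time: emit silence
>       out[i] = 0;
>     } else {
>       out[i] = Math.sin(2 * Math.PI * freq * phase / context.sampleRate);
>       phase++;
>     }
>   }
> };
>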
> ...joe
>
> On Oct 19, 2010, at 4:46 AM, Joe Turner wrote:
>
> Hi all,
> I'm afraid I've only been following the list for a couple of weeks,
> and so I have a feeling that this discussion may have been had
> already.  Also I may well be missing something in the spec so
> apologies in advance.
>
> That said, I think this post raises the wider issue of how scheduling
> will work.  With the current spec, as I understand it, I can very
> easily say "play this sound in 5 seconds"; however, I need to write my
> own scheduler if I want to say "start generating a sine wave in 5
> seconds".  It seems to me that these are fundamentally very similar
> ideas, and from an API point of view they should be treated the same.
>
> I think it would be more flexible if, rather than concentrating on how
> note-on scheduling works, a more generic scheduler object were
> introduced, either natively or in JavaScript.
>
> So Joe's examples would look something like this:
>
> CASE 1. One wishes to play a sequence of audio buffers at some
> sequence of evenly spaced times starting right now.
>
> function main() {
>   var context = new AudioContext();
>   context.scheduler = context.createScheduler();
>   playSequence(context, [/* buffers */], 0.25);
> }
>
> function playSequence(context, bufferList, interval) {
>   for (var i = 0; i < bufferList.length; i++) {
>     // Schedule relative to the current time
>     context.scheduler.scheduleRelative(interval * i,              // time
>                                        playBuffer,                // callback
>                                        [context, bufferList[i]]); // argument list
>   }
> }
>
> function playBuffer(context, buffer) {
>   var node = context.createBufferSource();
>   node.buffer = buffer;
>   node.connect(context.destination);
>   node.play(); // Instead of noteOn(0), as scheduling is independent of nodes
> }
>
> CASE 2: builds on CASE 1 by playing a supersequence of sequences, with
> its own time delay between the onset of lower-level sequences.
>
> function main() {
>   var context = new AudioContext();
>   context.scheduler = context.createScheduler();
>   playSupersequence(context, [/* buffers */], 10, 5.0);
> }
>
> function playSupersequence(context, bufferList, repeatCount, interval) {
>   for (var i = 0; i < repeatCount; i++) {
>     context.scheduler.scheduleRelative(i * interval, playSequence,
>                                        [context, bufferList, 0.25]);
>   }
> }
>
> function playSequence(context, bufferList, interval) {
>   for (var i = 0; i < bufferList.length; i++) {
>     context.scheduler.scheduleRelative(interval * i,              // time
>                                        playBuffer,                // callback
>                                        [context, bufferList[i]]); // argument list
>   }
> }
>
> function playBuffer(context, buffer) {
>   var node = context.createBufferSource();
>   node.buffer = buffer;
>   node.connect(context.destination);
>   node.play();
> }
>
>
> This seems, from an end-user's perspective, to offer a lot more
> flexibility than what is currently possible, and more even than Joe's
> proposal.  In fact, it is relatively similar to how SuperCollider
> (http://supercollider.sourceforge.net/) structures its scheduling API,
> which seems to work fairly well for a wide range of applications.
> Anticipating the problems with this, I imagine there is some
> significant overhead in calling into JavaScript which could easily
> send the timing skew-whiff.  Again, I guess this would need trying out
> to see how much of a problem it would be.
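>
> A crude, runnable way to get a feel for that skew is to measure how
> late plain JS timers actually fire (this says nothing about a native
> scheduler, only about the JS side of the round trip):
>
> function measureTimerSkew(delayMs, report) {
>   var intended = Date.now() + delayMs;
>   setTimeout(function() {
>     report(Date.now() - intended);  // positive = fired late, in ms
>   }, delayMs);
> }
>
> // e.g. measureTimerSkew(250, function(ms) { alert("late by " + ms + " ms"); });
>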
> Apologies again if this has been discussed before,
> Joe
>
> On Tue, Oct 19, 2010 at 2:56 AM, Joseph Berkovitz <joe@noteflight.com> wrote:
>
> On the subgraph-scheduling topic that we discussed on the call today, we
> resolved that we'd work through some code examples to understand the issue
> of subgraph scheduling better.  I would like to try to take a first step on
> this. If it doesn't feel valuable to go further with it, then I'll be fine
> with moving on!
>
> The issue for me is that I would like to be able to define a "local time
> origin" that is used to transform all time values used by noteOn(),
> startAt(), automateAt()... basically, most functions that care about time.
> Right now these are all scheduled relative to an "absolute time origin"
> that is associated with the owning AudioContext, which I feel is a bit
> inconvenient and requires extra parameters in every function in the call
> tree that makes a node graph.
>
> This feels to me like it's a low-impact thing to implement -- but only if
> people feel it's worth it.  Let me make a concrete proposal that seems cheap
> and easy, and try to show how it affects a couple of simple use cases.  My
> proposal is adapted directly from the notion of transforms in the HTML5
> Canvas specification, and consists of three functions on AudioContext:
> offset(), save(), restore().  AudioContext also acquires a new attribute:
> "currentOffset". Here are their definitions:
>
> Object transform: an Object with a numeric "offset" property which affects
> any time-based property of an object created from this AudioContext. Other
> properties could be added, e.g. "gain". The idea is that these are
> properties that make sense to affect a wide array of objects.
>
> void offset(float delta): adds delta to the value of transform.offset
>
> save(): pushes a copy of "transform" onto an internal stack in the
> AudioContext
>
> restore(): pops an object from the same internal stack into "transform"
>
> Implementation concept: the parent AudioContext's currentOffset value is
> automatically added to any time-valued parameters that are used in
> scheduling functions on a node, such as noteOn(), etc.
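>
> To pin down those semantics, here is a minimal JS prototype of the
> transform stack (purely illustrative; "resolve" is an invented name for
> the implicit addition step just described):
>
> function TimeTransform() {
>   var stack = [];
>   this.transform = { offset: 0 };
>   this.offset = function(delta) { this.transform.offset += delta; };
>   this.save = function() { stack.push({ offset: this.transform.offset }); };
>   this.restore = function() { this.transform = stack.pop(); };
>   // What the context would do implicitly to every scheduled time:
>   this.resolve = function(t) { return t + this.transform.offset; };
> }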
>
> USE CASES
>
> The main difference is simple: with local time offsets saved in the context,
> one can eliminate a whole bunch of "startTime" parameters that need to be
> passed through everywhere.  This may not seem like much of a saving, but it
> feels cleaner to me, and if the spec ever starts to generalize the notion of
> a saved/restored transform to include other variables besides time (e.g. a
> "local gain" or "local pan"), it starts really paying off.  You don't want
> to go back and ask developers to add a whole bunch of new parameters to
> existing functions and pass all these values through everywhere.
>
> I'm going to give two use cases. The second one builds on the first.
>
> CASE 1. One wishes to play a sequence of audio buffers at some sequence of
> evenly spaced times starting right now.
>
> Code needed today:
>
> function main() {
>   var context = new AudioContext();
>   playSequence(context, [/* buffers */], 0.25, context.currentTime);
> }
>
> function playSequence(context, bufferList, interval, startTime) {
>   for (var i = 0; i < bufferList.length; i++) {
>     playBuffer(context, bufferList[i], startTime);
>     startTime += interval;
>   }
> }
>
> function playBuffer(context, buffer, startTime) {
>   var node = context.createBufferSource();
>   node.buffer = buffer;
>   node.connect(context.destination);
>   node.noteOn(startTime);
> }
>
> Code needed with time-offset transforms:
>
> function main() {
>   var context = new AudioContext();
>   // From here on out, all time offsets are relative to "now"
>   context.offset(context.currentTime);
>   playSequence(context, [/* buffers */], 0.25);
> }
>
> function playSequence(context, bufferList, interval) {
>   for (var i = 0; i < bufferList.length; i++) {
>     playBuffer(context, bufferList[i]);
>     context.offset(interval);
>   }
> }
>
> function playBuffer(context, buffer) {
>   var node = context.createBufferSource();
>   node.buffer = buffer;
>   node.connect(context.destination);
>   node.noteOn(0);  // starts relative to local time offset determined by caller
> }
>
> CASE 2: builds on CASE 1 by playing a supersequence of sequences, with its
> own time delay between the onset of lower-level sequences.
>
> Code needed today:
>
> function main() {
>   var context = new AudioContext();
>   playSupersequence(context, [/* buffers */], 10, 5.0, context.currentTime);
> }
>
> function playSupersequence(context, bufferList, repeatCount, interval,
>                            startTime) {
>   for (var i = 0; i < repeatCount; i++) {
>     playSequence(context, bufferList, 0.25, startTime + (i * interval));
>   }
> }
>
> function playSequence(context, bufferList, interval, startTime) {
>   for (var i = 0; i < bufferList.length; i++) {
>     playBuffer(context, bufferList[i], startTime);
>     startTime += interval;
>   }
> }
>
> function playBuffer(context, buffer, startTime) {
>   var node = context.createBufferSource();
>   node.buffer = buffer;
>   node.connect(context.destination);
>   node.noteOn(startTime);
> }
>
> Code needed with time-offset transforms:
>
> function main() {
>   var context = new AudioContext();
>   context.offset(context.currentTime);
>   playSupersequence(context, [/* buffers */], 10, 5.0);
> }
>
> function playSupersequence(context, bufferList, repeatCount, interval) {
>   for (var i = 0; i < repeatCount; i++) {
>     playSequence(context, bufferList, 0.25);
>     context.offset(interval);
>   }
> }
>
> // Note use of save() and restore() to allow this function to preserve the
> // caller's time shift
> function playSequence(context, bufferList, interval) {
>   context.save();
>   for (var i = 0; i < bufferList.length; i++) {
>     playBuffer(context, bufferList[i]);
>     context.offset(interval);
>   }
>   context.restore();
> }
>
> function playBuffer(context, buffer) {
>   var node = context.createBufferSource();
>   node.buffer = buffer;
>   node.connect(context.destination);
>   node.noteOn(0);  // starts relative to local time offset determined by caller
> }
>
> ... .  .    .       Joe
> Joe Berkovitz
> President
> Noteflight LLC
> 160 Sidney St, Cambridge, MA 02139
> phone: +1 978 314 6271
> www.noteflight.com

Received on Tuesday, 19 October 2010 14:44:15 UTC