Re: Sample-accurate JS output (was: scheduling subgraphs)

Apologies to Joe B, who has already seen this once - I meant to send it
reply-to-all but have a brain like a sieve when it comes to email.

On Wed, Oct 20, 2010 at 11:45 PM, Joseph Berkovitz <joe@noteflight.com> wrote:
> Further implementation thoughts on this issue -- this should address the
> many-short-notes cases as well as other pathological cases.
> When I say "JS nodes" here, by the way, I am only talking about *generator*
> JS nodes, i.e. JS nodes with no inputs.  I don't have any good ideas about
> JS nodes that act as filters, I think if one has a lot of those one may be
> inherently hosed in terms of performance.

I think you might only be partly hosed, depending on what your graph
looks like.  So, for example,

 JSGenerator -> JSFilter -> JSFilter -> NativeFilter -> Output

should only need a single AudioProcessingEvent (similar to the basic
JSGenerator -> Output case).

In contrast,

JSGenerator -> JSFilter -> NativeFilter -> JSFilter -> Output

would need two AudioProcessingEvents.

I think all this would need is for the AudioProcessingEvent to be sent
to an intermediate JavaScript function that looks something like this:

JavaScriptAudioNode.prototype._preonaudioprocess =
function(audioProcessingEvent) {
    // Run this node's own processing callback first.
    this.onaudioprocess(audioProcessingEvent);
    // If the downstream node is also a JS node, hand the event straight
    // on so the whole JS chain is processed in one context switch.
    if (this._isConnectedToAnotherJSAudioNode()) {
        this._swapInputAndOutputBuffers(audioProcessingEvent);
        this._getNextAudioNode()._preonaudioprocess(audioProcessingEvent);
    }
};
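
For concreteness, the helpers I'm hand-waving at above might look
roughly like the following.  All of these names are invented for the
sake of the example - none of them exist in the current draft, and a
real engine would presumably keep this state internally rather than on
the node object:

JavaScriptAudioNode.prototype._isConnectedToAnotherJSAudioNode =
function() {
    // True if the node we feed into is itself a JS node.
    return this._getNextAudioNode() instanceof JavaScriptAudioNode;
};

JavaScriptAudioNode.prototype._getNextAudioNode =
function() {
    // Assume a single downstream connection to keep the sketch simple.
    return this._outputConnections[0];
};

JavaScriptAudioNode.prototype._swapInputAndOutputBuffers =
function(audioProcessingEvent) {
    // What this node just wrote becomes the next node's input, and the
    // next node writes into what used to be the input buffer.  (Assumes
    // the engine's internal event object is mutable.)
    var temp = audioProcessingEvent.inputBuffer;
    audioProcessingEvent.inputBuffer = audioProcessingEvent.outputBuffer;
    audioProcessingEvent.outputBuffer = temp;
};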

Does this sound plausible?  I think this fits in fine with the idea of
only sending AudioProcessingEvents to the nodes that are active.
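
To check my understanding of your outline (quoted in full below), here
is roughly how I picture the engine-side bookkeeping for generator
nodes: work out which nodes are active for the batch, hand each one a
playbackTime and bufferLength covering only the intersected range, then
pad the result back out to the full batch.  Everything here is sketch
code with made-up names (and mono buffers for brevity), not a proposal
for the spec:

// batchTime, startTime and endTime are in seconds; N and bufferLength
// are in sample frames.
function processJSGeneratorBatch(jsGeneratorNodes, batchTime, N, sampleRate) {
    var batchEndTime = batchTime + (N - 1) / sampleRate;

    for (var i = 0; i < jsGeneratorNodes.length; i++) {
        var node = jsGeneratorNodes[i];

        // Active only if (startTime, endTime) intersects the batch range.
        if (node.endTime <= batchTime || node.startTime >= batchEndTime)
            continue;

        var rangeStart = Math.max(node.startTime, batchTime);
        var rangeEnd = Math.min(node.endTime, batchEndTime);
        var bufferLength = Math.round((rangeEnd - rangeStart) * sampleRate);

        // The node only sees the intersected range: playbackTime and
        // bufferLength fully describe what it is being asked for.
        var event = {
            playbackTime: rangeStart,
            bufferLength: bufferLength,
            outputBuffer: new Float32Array(bufferLength)
        };
        node.onaudioprocess(event);

        // Left-pad by (startTime - batchTime) * sampleRate samples; the
        // zero-filled Float32Array supplies the right padding out to N.
        var leftPad = Math.round((rangeStart - batchTime) * sampleRate);
        var padded = new Float32Array(N);
        padded.set(event.outputBuffer, leftPad);

        mixIntoEngineBatch(padded);   // hypothetical engine mixing step
    }
}

A generator node's handler then just fills exactly event.bufferLength
frames, with frame 0 corresponding to event.playbackTime, e.g. a toy
sine generator (again, made-up):

sineNode.onaudioprocess = function(event) {
    for (var i = 0; i < event.bufferLength; i++) {
        // 44100 Hz hard-coded purely to keep the example short.
        var t = event.playbackTime + i / 44100;
        event.outputBuffer[i] = Math.sin(2 * Math.PI * 440 * t);
    }
};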

On Wed, Oct 20, 2010 at 11:45 PM, Joseph Berkovitz <joe@noteflight.com> wrote:
> Further implementation thoughts on this issue -- this should address the
> many-short-notes cases as well as other pathological cases.
> When I say "JS nodes" here, by the way, I am only talking about *generator*
> JS nodes, i.e. JS nodes with no inputs.  I don't have any good ideas about
> JS nodes that act as filters, I think if one has a lot of those one may be
> inherently hosed in terms of performance.
>
> The goal is to restrict JS activity to only those JS generator nodes which
> can contribute output to a synchronous processing batch, and to pad each
> node's output on either side as needed to fill out its buffers to the size
> expected by the audio engine.  Each node only "sees" a request for some # of
> samples at some specified start time as specified in the
> AudioProcessingEvent, and doesn't have to worry about padding or about being
> called at an inappropriate time.
> 1. In general do not allow JS nodes to determine their own buffer size.
>  Provide an event.bufferLength attribute in AudioProcessingEvent which JS
> nodes will respect: they are expected to return buffer(s) of exactly this
> length with the first sample reflecting the generated signal at
> event.playbackTime.  Dispense with the ability to specify a bufferLength at
> JS node creation time; the audio engine is in charge, not the programmer.
> 2. (rough outline of algorithm, ignoring threading issues -- idea is to
> context-switch once and process all JS generator nodes in one gulp)
>    let N be number of samples in a synchronous processing batch for the
> audio engine (i.e. a graph-wide batch pushed through all nodes to the
> destination)
>    let batchTime be the current rendering time of the first sample in the
> batch
>    let startTime, endTime be start, end times of some JS generator node
> (i.e. the noteOn/startAt() or noteOff()/stopAt() times)
>    consider a node active if the range (batchTime, batchTime +
> (N-1)/sampleRate) intersects the range (startTime, endTime)
>    dispatch an AudioProcessingEvent to such a node, where the event's
> playbackTime and bufferLength together describe the above intersected range
> (which will usually be an entire processing batch of N samples).  The result
> may be less than N samples, however, if the node became active or inactive
> during the processing batch.
>    left-pad the returned samples by (startTime - batchTime) * sampleRate,
> restricting to range 0 .. N
>    right-pad the returned samples by N - ((endTime - batchTime) *
> sampleRate), restricting to range 0 .. N
> I didn't make this algorithm up from scratch, it's adapted from the
> StandingWave Performance code, so I believe it pretty much works.
> ... .  .    .       Joe
> Joe Berkovitz
> President
> Noteflight LLC
> 160 Sidney St, Cambridge, MA 02139
> phone: +1 978 314 6271
> www.noteflight.com
>
> On Oct 20, 2010, at 3:27 PM, Chris Rogers wrote:
>
> Yes, that's what I've been thinking as well.  There's still the
> buffering/latency issue which will affect how near into the future it will
> be possible to schedule these types of events, but I suppose that's a given.
>  Also, there could be pathological cases where there are many very short
> notes which aren't exactly at the same time, but close.  Then they wouldn't
> be processed properly in the batch.  But, with the proper kind of algorithm,
> maybe even these cases could be coalesced if great care were taken, and
> possibly at the cost of even greater buffering.
> Chris
>
> On Wed, Oct 20, 2010 at 12:51 PM, Joseph Berkovitz <joe@noteflight.com>
> wrote:
>
>>
>> Implementation thought:
>> I was thinking, if all JS nodes process sample batches in lock step, can
>> all active JS nodes be scheduled to run in sequence in a single thread
>> context switch, instead of context-switching once per node?
>
>

Received on Thursday, 21 October 2010 16:15:57 UTC