Re: Sample-accurate JS output (was: scheduling subgraphs)

Hi Joe,

I think you have some good points about letting the audio engine pick a
"good" buffer size for JavaScriptAudioNode.  As you point out, the tuning
can best be determined by the implementation, and will likely vary depending
on OS platform and browser.  Unless others strongly disagree, we should
change the spec accordingly.

However, this buffer size (for JavaScript-based processing) will generally
be somewhat larger than the buffer size used internally by the other (native
processing) AudioNodes.  The native nodes can run at a smaller buffer size
for much better latency.  For example, consider the use cases of "play sound
now" from a key or mouse event, and processing live audio input (via the
<device> tag or whatever it turns out to be).
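
To make the latency difference concrete, here is a back-of-the-envelope
sketch (the 4096 figure is only an example of a plausible JS buffer size,
not a number from the spec):

    // Latency contributed by one buffer of the given size.
    function bufferLatencyMs(frames, sampleRate) {
      return 1000 * frames / sampleRate;
    }

    bufferLatencyMs(128, 44100);   // ~2.9 ms  (native processing batch)
    bufferLatencyMs(4096, 44100);  // ~92.9 ms (an example JS buffer size)

So a JS node buffered at 4096 frames adds well over an order of magnitude
more latency than the native path, which is what makes the "play sound now"
cases suffer.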

Chris


On Tue, Oct 26, 2010 at 1:23 PM, Joseph Berkovitz <joe@noteflight.com> wrote:

> Thanks Chris!
>
> Your post prompts another thought: one more reason to pass the buffer size
> into a JavaScriptAudioNode via AudioProcessingEvent (as opposed to letting
> the node dictate) is that it could be useful to tune the AudioContext's
> behavior as a whole, for low latency vs. high stability.  This setting on
> the AudioContext -- whatever form it happens to take -- would then drive the
> buffer size and batching behavior of the whole engine, in a top-down kind of
> way.  So I think this is a highly desirable feature to add.
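>
> To make the shape of this concrete, here is a purely hypothetical sketch
> (none of these names are settled; "latencyHint" in particular is something
> I just made up for illustration):
>
>     var context = new AudioContext();
>     context.latencyHint = "stability";   // hypothetical; vs. "interactive"
>
>     // note: no bufferSize argument -- the engine decides, guided by the hint
>     var node = context.createJavaScriptNode(0, 1);  // 0 inputs, 1 output
>     node.onaudioprocess = function (event) {
>       // event.bufferLength is whatever size the engine chose
>       var out = event.outputBuffer.getChannelData(0);
>       for (var i = 0; i < event.bufferLength; i++) {
>         out[i] = 0;  // placeholder: fill with silence
>       }
>     };
>     node.connect(context.destination);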
>
> For example Noteflight values playback robustness over latency -- there is
> little or no real-time control of sound output.  However, other applications
> will have a completely different profile and might want really low latency
> even if there's some risk of jitter or buffer underrun.  And many
> applications would want to occupy some kind of middle ground.
>
> The Flash Player 10 audio API supports a concept of globally requesting low
> latency vs. playback stability by a very quirky approach that I wouldn't
> want to see us emulate, but the end result is that one winds up indirectly
> specifying a "batch size" for audio buffers between 1K and 8K samples.
>  StandingWave then simply picks this batch size up from Flash and propagates
> it through its own code.
>
> ...Joe
>
> On Oct 21, 2010, at 1:29 PM, Chris Rogers wrote:
>
> Hi Joe,
>
> Thanks for the very detailed description.  Interestingly, this is
> effectively what I'm already doing for AudioBufferSourceNode internally,
> minus the batching and threading stuff.  The number N in the current engine
> is 128 @ 44.1 kHz (for low latency) so I think this would be too small of a
> batch size to dispatch periodically to the main thread.  But, this can be
> easily solved by buffering into larger chunks, which I'm already doing in my
> current JavaScriptAudioNode.
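>
> Schematically, that buffering amounts to something like this (a simplified
> JS rendering of what the engine does internally; dispatchToJS() is a
> stand-in for the real event dispatch):
>
>     var QUANTUM = 128;           // engine's internal batch size
>     var JS_BUFFER_SIZE = 2048;   // example only -- the point is it's tunable
>     var accumulator = new Float32Array(JS_BUFFER_SIZE);
>     var writeIndex = 0;
>
>     // called once per 128-frame internal batch
>     function pushQuantum(quantum) {
>       accumulator.set(quantum, writeIndex);
>       writeIndex += QUANTUM;
>       if (writeIndex === JS_BUFFER_SIZE) {
>         dispatchToJS(accumulator);   // hand one big chunk to the main thread
>         writeIndex = 0;
>       }
>     }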
>
> It would be good to have the API for the generator (output-only) and the
> processor (input and output) cases be very close or the same, even if this
> optimization is only generally possible for the generator.  Currently I have
> a JavaScriptAudioNode to handle both cases...
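>
> For instance, the handler could look identical in both cases, with the
> generator case simply seeing no input (a sketch only -- whether inputBuffer
> would be null or empty is exactly the kind of detail still to settle):
>
>     node.onaudioprocess = function (event) {
>       var out = event.outputBuffer.getChannelData(0);
>       if (event.inputBuffer) {
>         // processor case: transform input into output
>         var inp = event.inputBuffer.getChannelData(0);
>         for (var i = 0; i < out.length; i++)
>           out[i] = 0.5 * inp[i];           // e.g. a simple gain
>       } else {
>         // generator case: synthesize from scratch
>         for (var i = 0; i < out.length; i++)
>           out[i] = Math.random() * 2 - 1;  // e.g. white noise
>       }
>     };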
>
> Anyway, I really appreciate your great insights and experience here!
>
> Cheers,
> Chris
>
>
>
> On Wed, Oct 20, 2010 at 3:45 PM, Joseph Berkovitz <joe@noteflight.com> wrote:
>
>> Further implementation thoughts on this issue -- this should address the
>> many-short-notes cases as well as other pathological cases.
>>
>> When I say "JS nodes" here, by the way, I am only talking about
>> *generator* JS nodes, i.e. JS nodes with no inputs.  I don't have any good
>> ideas about JS nodes that act as filters, I think if one has a lot of those
>> one may be inherently hosed in terms of performance.
>>
>> The goal is to restrict JS activity to only those JS generator nodes which
>> can contribute output to a synchronous processing batch, and to pad each
>> node's output on either side as needed to fill out its buffers to the size
>> expected by the audio engine.  Each node only "sees" a request for some # of
>> samples at some start time, as specified in the AudioProcessingEvent, and
>> doesn't have to worry about padding or about being
>> called at an inappropriate time.
>>
>> 1. In general do not allow JS nodes to determine their own buffer size.
>>  Provide an event.bufferLength attribute in AudioProcessingEvent which JS
>> nodes will respect: they are expected to return buffer(s) of exactly this
>> length with the first sample reflecting the generated signal at
>> event.playbackTime.  Dispense with the ability to specify a bufferLength at
>> JS node creation time; the audio engine is in charge, not the programmer.
>>
>> 2. (rough outline of algorithm, ignoring threading issues -- idea is to
>> context-switch once and process all JS generator nodes in one gulp)
>>    let N be number of samples in a synchronous processing batch for the
>> audio engine (i.e. a graph-wide batch pushed through all nodes to the
>> destination)
>>    let batchTime be the current rendering time of the first sample in the
>> batch
>>    let startTime, endTime be start, end times of some JS generator node
>> (i.e. the noteOn/startAt() or noteOff()/stopAt() times)
>>    consider a node active if the range (batchTime, batchTime +
>> (N-1)/sampleRate) intersects the range (startTime, endTime)
>>    dispatch an AudioProcessingEvent to such a node, where the event's
>> playbackTime and bufferLength together describe the above intersected range
>> (which will usually be an entire processing batch of N samples).  The result
>> may be less than N samples, however, if the node became active or inactive
>> during the processing batch.
>>    left-pad the returned samples by (startTime - batchTime) * sampleRate
>> samples, restricting to range 0 .. N
>>    right-pad the returned samples by N - ((endTime - batchTime) *
>> sampleRate) samples, restricting to range 0 .. N (see the sketch just below)
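>>
>> Here is that loop as a rough JS sketch, to make the padding concrete
>> (node.process() is a hypothetical stand-in for synchronously dispatching
>> the AudioProcessingEvent and collecting the returned samples):
>>
>>     // Render one batch of N samples from a set of JS generator nodes.
>>     function renderBatch(nodes, batchTime, N, sampleRate) {
>>       var batchEnd = batchTime + (N - 1) / sampleRate; // time of last sample
>>       var mix = new Float32Array(N);
>>       nodes.forEach(function (node) {
>>         // active iff (startTime, endTime) intersects the batch's time range
>>         if (node.endTime <= batchTime || node.startTime > batchEnd) return;
>>         var from = Math.max(node.startTime, batchTime);
>>         var to = Math.min(node.endTime, batchEnd);
>>         var bufferLength = Math.round((to - from) * sampleRate) + 1;
>>         var samples = node.process({ playbackTime: from,
>>                                      bufferLength: bufferLength });
>>         // left-padding: offset where this node's output starts in the batch
>>         var leftPad = Math.round((from - batchTime) * sampleRate);
>>         for (var i = 0; i < bufferLength; i++)
>>           mix[leftPad + i] += samples[i];
>>         // right-padding is implicit: entries past leftPad + bufferLength
>>         // simply remain zero
>>       });
>>       return mix;
>>     }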
>>
>> I didn't make this algorithm up from scratch, it's adapted from the
>> StandingWave Performance code, so I believe it pretty much works.
>>
>> ... .  .    .       Joe
>>
>> *Joe Berkovitz*
>> President
>> Noteflight LLC
>> 160 Sidney St, Cambridge, MA 02139
>> phone: +1 978 314 6271
>> www.noteflight.com
>>
>>
>> On Oct 20, 2010, at 3:27 PM, Chris Rogers wrote:
>>
>> Yes, that's what I've been thinking as well.  There's still the
>> buffering/latency issue which will affect how near into the future it will
>> be possible to schedule these types of events, but I suppose that's a given.
>>  Also, there could be pathological cases where there are many very short
>> notes which aren't exactly at the same time, but close.  Then they wouldn't
>> be processed properly in the batch.  But, with the proper kind of algorithm,
>> maybe even these cases could be coalesced if great care were taken, and
>> possibly at the cost of even greater buffering.
>>
>> Chris
>>
>> On Wed, Oct 20, 2010 at 12:51 PM, Joseph Berkovitz <joe@noteflight.com> wrote:
>>
>>
>>>  Implementation thought:
>>>
>>> I was thinking, if all JS nodes process sample batches in lock step, can
>>> all active JS nodes be scheduled to run in sequence in a single thread
>>> context switch, instead of context-switching once per node?
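>>>
>>> Very schematically (postToMainThread() is a made-up stand-in for whatever
>>> cross-thread dispatch the engine actually uses):
>>>
>>>     // one hop to the main thread per batch, not one hop per node
>>>     function runJsNodesForBatch(activeNodes, events) {
>>>       postToMainThread(function () {
>>>         for (var i = 0; i < activeNodes.length; i++)
>>>           activeNodes[i].onaudioprocess(events[i]);  // run back-to-back
>>>       });
>>>     }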
>>>
>>>
>>
>
> ... .  .    .       Joe
>
> *Joe Berkovitz*
> President
> Noteflight LLC
> 160 Sidney St, Cambridge, MA 02139
> phone: +1 978 314 6271
> www.noteflight.com
>
>

Received on Tuesday, 26 October 2010 21:11:10 UTC