Re: [web-audio-api] OfflineAudioContext and ScriptProcessorNodes (#69)

> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=22723#18) by Chris Rogers on W3C Bugzilla. Tue, 23 Jul 2013 22:20:12 GMT

(In reply to [comment #18](#issuecomment-24244251))
> (In reply to [comment #17](#issuecomment-24244243))
> > (In reply to [comment #16](#issuecomment-24244239))
> > > (In reply to [comment #15](#issuecomment-24244234))
> > > > (In reply to [comment #7](#issuecomment-24244176))
> > > > > If every node used this "wait for all inputs before running" logic, then
> > > > > script nodes with buffer sizes greater than 128 need not impose a delay in
> > > > > their signal paths. 
> > > > 
> > > > I just realized a subtlety in this. If a script processor node's
> > > > onaudioprocess reads computed values from AudioParams, then the perceived
> > > > k-rate for those AudioParams will be determined by the block size set for
> > > > the script node and not the fixed 128-sample-block in the spec. Not only
> > > > that, it will look like a filter-type script node (with input and output) is
> > > > prescient and anticipates animated AudioParams, because the
> > > > onaudioprocess will only get to run once enough input chunks have
> > > > accumulated, meaning the values of some of these k-rate AudioParams could
> > > > already have advanced to a time corresponding to the end of the script
> > > > node's buffer duration.
> > > 
> > > No, according to the spec the implementation must do 128-frame block
> > > processing all the time, which means that for example if we have 1024 frames
> > > to fill up for a ScriptProcessorNode, we need to call the block processing
> > > code 8 times, and each k-rate AudioParam will be sampled at the beginning of
> > > each block.
> > 
> > That holds only for the native nodes, doesn't it?
> 
> No, that's true for all nodes.
> 
> > With the real-time
> > context, script processor nodes with buffer sizes > 128 (which is all the
> > time) already have a lower k-rate than the native nodes if they read
> > computed values of AudioParams within their onaudioprocess callbacks.
> 
> I'm not sure what you mean here.  How do you "sample" the AudioParam value
> inside the audioprocess event handler?
> 
> > Anyway, to ensure that the k-rate is uniform at least during offline
> > processing, it looks like the only way is to raise onaudioprocess events for
> > each 128-sample-frame block. The event dispatcher better put up some
> > performance :)
> 
> Doing that violates the current spec, and I think would be a very bad idea.

I agree that we shouldn't force it to run at 128.  But I think we should change the current spec to allow for size 128, especially for the OfflineAudioContext case.  Right now, the minimum size is 256, and we would like to get zero latency for this offline case.
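To make the quoted point about block processing concrete, here is a plain-JS model (a sketch, not the Web Audio API itself; `paramValueAt` is a made-up stand-in for a computed k-rate AudioParam value) of how a 1024-frame ScriptProcessorNode buffer would be filled by eight 128-frame blocks, with the param sampled once at the start of each block:

```javascript
// Plain-JS model of spec-mandated 128-frame block processing.
// Not Web Audio API code; paramValueAt is a hypothetical stand-in
// for an AudioParam's computed k-rate value.
const RENDER_QUANTUM = 128;       // fixed block size from the spec
const SCRIPT_BUFFER_SIZE = 1024;  // a typical ScriptProcessorNode buffer

// A k-rate param ramping linearly from 0 to 1 over the script buffer.
const paramValueAt = (frame) => frame / SCRIPT_BUFFER_SIZE;

function renderScriptBuffer() {
  const sampledValues = [];
  for (let frame = 0; frame < SCRIPT_BUFFER_SIZE; frame += RENDER_QUANTUM) {
    // The engine samples the k-rate param at the start of each 128-frame block.
    sampledValues.push(paramValueAt(frame));
  }
  return sampledValues;
}

console.log(renderScriptBuffer().length); // 8 samples per 1024-frame buffer
```

So the param still advances eight times within one script-node buffer, rather than once per 1024 frames.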

I've created an early prototype in Chrome which synchronizes the audio thread with the main thread and runs at 128.  I've found that the average time between onaudioprocess callbacks is around 50 microseconds.  I tried a really simple test case:

AudioBufferSourceNode -> ScriptProcessorNode -> OfflineAudioContext.destination

and rendered a stretch of audio several minutes long.  On a mid-range Mac Pro I saw around 60x real-time performance.
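That ~60x figure is consistent with the measured callback spacing. Assuming a 44.1 kHz sample rate (not stated above), a 128-frame block spans about 2.9 ms of audio, so if each block costs ~50 µs of wall-clock time, dispatch overhead alone caps rendering at roughly 58x real time. A quick back-of-envelope check:

```javascript
// Back-of-envelope: does ~50 us per onaudioprocess callback explain ~60x
// real-time rendering? (44.1 kHz is an assumed sample rate.)
const sampleRate = 44100;
const blockFrames = 128;
const callbackSeconds = 50e-6; // measured average gap between callbacks

const blockAudioSeconds = blockFrames / sampleRate; // ~2.9 ms of audio per block
const maxSpeedup = blockAudioSeconds / callbackSeconds;
console.log(maxSpeedup.toFixed(1)); // prints "58.0"
```

So the observed ~60x is right about at the ceiling set by event-dispatch overhead.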

---
Reply to this email directly or view it on GitHub:
https://github.com/WebAudio/web-audio-api/issues/69#issuecomment-24244254

Received on Wednesday, 11 September 2013 14:30:14 UTC