Re: [web-audio-api] OfflineAudioContext and ScriptProcessorNodes (#69)

> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=22723#15) by Ehsan Akhgari [:ehsan] on W3C Bugzilla. Mon, 22 Jul 2013 16:23:03 GMT

(In reply to [comment #15](#issuecomment-24244234))
> (In reply to [comment #7](#issuecomment-24244176))
> > If every node used this "wait for all inputs before running" logic, then
> > script nodes with buffer sizes greater than 128 need not impose a delay in
> > their signal paths. 
> 
> I just realized a subtlety in this. If a script processor node's
> onaudioprocess reads computed values from AudioParams, then the perceived
> k-rate for those AudioParams will be determined by the block size set for
> the script node and not the fixed 128-sample-block in the spec. Not only
> that, it will look like a filter-type script node (with input and output) is
> prescient and anticipates animated AudioParams, because the
> onaudioprocess will only get to run once enough input chunks have
> accumulated, meaning the values of some of these k-rate AudioParams could
> already have advanced to a time corresponding to the end of the script
> node's buffer duration.

No, according to the spec the implementation must do 128-frame block processing all the time. For example, if we have 1024 frames to fill for a ScriptProcessorNode, we need to call the block processing code 8 times, and each k-rate AudioParam will be sampled at the beginning of each of those 128-frame blocks.
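To make that concrete, here is a minimal, self-contained sketch of the behaviour being described. It is not spec text or implementation code: the linear-ramp automation in `kRateGainAt` and the helper names are illustrative assumptions. It just models an engine that always renders in 128-frame blocks, sampling a k-rate AudioParam once per block while assembling a 1024-frame ScriptProcessorNode buffer.

```js
// Sketch only: simulate filling one 1024-frame ScriptProcessorNode buffer
// out of 128-frame rendering blocks, as described above.

const RENDER_QUANTUM = 128;       // fixed block size mandated by the spec
const SCRIPT_BUFFER_SIZE = 1024;  // bufferSize chosen for the ScriptProcessorNode
const SAMPLE_RATE = 44100;

// Hypothetical k-rate automation: a gain ramping linearly from 0 to 1 over 1 second.
function kRateGainAt(time) {
  return Math.min(time, 1);
}

function renderScriptBuffer(startFrame) {
  const buffer = new Float32Array(SCRIPT_BUFFER_SIZE);
  const blocks = SCRIPT_BUFFER_SIZE / RENDER_QUANTUM; // 1024 / 128 = 8 blocks
  for (let block = 0; block < blocks; block++) {
    // The k-rate value is computed once, at the start of each 128-frame block...
    const blockStartTime = (startFrame + block * RENDER_QUANTUM) / SAMPLE_RATE;
    const gain = kRateGainAt(blockStartTime);
    // ...and held constant for all 128 frames of that block.
    for (let i = 0; i < RENDER_QUANTUM; i++) {
      buffer[block * RENDER_QUANTUM + i] = gain;
    }
  }
  return buffer; // onaudioprocess would see this 1024-frame buffer in one callback
}

// The buffer handed to onaudioprocess therefore contains 8 distinct gain steps,
// not a single value held for all 1024 frames.
const buf = renderScriptBuffer(0);
console.log(new Set(buf).size); // 8
```

In other words, even though onaudioprocess only fires once per 1024 frames, the k-rate AudioParam is still advanced and sampled at the spec's 128-frame granularity inside that buffer.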

---
Reply to this email directly or view it on GitHub:
https://github.com/WebAudio/web-audio-api/issues/69#issuecomment-24244239

Received on Wednesday, 11 September 2013 14:34:00 UTC