- From: Olivier Thereaux <notifications@github.com>
- Date: Wed, 11 Sep 2013 07:29:39 -0700
- To: WebAudio/web-audio-api <web-audio-api@noreply.github.com>
- Message-ID: <WebAudio/web-audio-api/issues/69/24244251@github.com>
> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=22723#17) by Ehsan Akhgari [:ehsan] on W3C Bugzilla. Mon, 22 Jul 2013 17:38:57 GMT

(In reply to [comment #17](#issuecomment-24244243))
> (In reply to [comment #16](#issuecomment-24244239))
> > (In reply to [comment #15](#issuecomment-24244234))
> > > (In reply to [comment #7](#issuecomment-24244176))
> > > > If every node used this "wait for all inputs before running" logic, then
> > > > script nodes with buffer sizes greater than 128 need not impose a delay in
> > > > their signal paths.
> > >
> > > I just realized a subtlety in this. If a script processor node's
> > > onaudioprocess reads computed values from AudioParams, then the perceived
> > > k-rate for those AudioParams will be determined by the block size set for
> > > the script node and not the fixed 128-sample block in the spec. Not only
> > > that, it will look like a filter-type script node (with input and output) is
> > > prescient and anticipates animated AudioParams, because the
> > > onaudioprocess will only get to run once enough input chunks have
> > > accumulated, meaning the values of some of these k-rate AudioParams could
> > > already have advanced to a time corresponding to the end of the script
> > > node's buffer duration.
> >
> > No, according to the spec the implementation must do 128-frame block
> > processing all the time, which means that for example if we have 1024 frames
> > to fill up for a ScriptProcessorNode, we need to call the block processing
> > code 8 times, and each k-rate AudioParam will be sampled at the beginning of
> > each block.
>
> That holds only for the native nodes, doesn't it?

No, that's true for all nodes.

> With the real-time context, script processor nodes with buffer sizes > 128
> (which is all the time) already have a lower k-rate than the native nodes if
> they read computed values of AudioParams within their onaudioprocess
> callbacks.

I'm not sure what you mean here. How do you "sample" the AudioParam value inside the audioprocess event handler?

> Anyway, to ensure that the k-rate is uniform at least during offline
> processing, it looks like the only way is to raise onaudioprocess events for
> each 128-sample-frame block. The event dispatcher better put up some
> performance :)

Doing that violates the current spec, and I think would be a very bad idea.

---
Reply to this email directly or view it on GitHub:
https://github.com/WebAudio/web-audio-api/issues/69#issuecomment-24244251
Received on Wednesday, 11 September 2013 14:34:26 UTC
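
The distinction being argued over can be sketched roughly as follows. This is an illustrative sketch, not code from the thread: it assumes that reading `AudioParam.value` from inside `onaudioprocess` reflects the param's computed automation value, which is exactly the point Ehsan questions above, and the variable names (`ctx`, `gain`, `script`, `osc`) are hypothetical.

```js
// A gain whose k-rate param is automated over one second.
const ctx = new AudioContext();
const gain = ctx.createGain();
gain.gain.setValueAtTime(0, ctx.currentTime);
gain.gain.linearRampToValueAtTime(1, ctx.currentTime + 1);

// Per the spec, native nodes sample a k-rate AudioParam at the start of every
// 128-frame block, i.e. 8 times over 1024 frames. A ScriptProcessorNode with a
// 1024-frame buffer gets only one onaudioprocess call over that same span, so
// any value read here is observed at most once per 1024 frames.
const script = ctx.createScriptProcessor(1024, 1, 1);
script.onaudioprocess = (event) => {
  const observed = gain.gain.value; // one observation per 1024 frames
  console.log('playbackTime', event.playbackTime, 'gain seen by script', observed);

  // Pass the audio through unchanged.
  for (let ch = 0; ch < event.outputBuffer.numberOfChannels; ch++) {
    event.outputBuffer.getChannelData(ch).set(event.inputBuffer.getChannelData(ch));
  }
};

const osc = ctx.createOscillator();
osc.connect(gain);
gain.connect(script);
script.connect(ctx.destination);
osc.start();
```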