Re: [web-audio-api] OfflineAudioContext and ScriptProcessorNodes (#69)

> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=22723#5) by Chris Rogers on W3C Bugzilla. Fri, 19 Jul 2013 19:28:29 GMT

(In reply to [comment #5](#issuecomment-24244160))
> I accidentally hit "save changes" before I got a chance to read what I'd
> typed. Apologies for the duplication. Here it is, edited -
> 
> --
> I agree with Ehsan here. The only reason it is convenient to have a buffer
> size specifiable for a script processor node is to tweak it to avoid audio
> glitching, since (as Ehsan pointed out) we can always code up any delays we
> require for an application in the onaudioprocess callback. This is not
> useful for offline processing.
> --

I'm just trying to get clarification and more detail on what you mean here, since I'm working on a prototype right now. Maybe this is already clear to you, but I just wanted to make sure...

In the general case, a ScriptProcessorNode has *both* input and output data and acts as a signal processor.  In order to achieve zero latency, the buffer size has to be 128, the same as the rest of the native nodes.  If this were not the case, then the .onaudioprocess handler could not be called until enough input samples (>128) were buffered (for the .inputBuffer attribute of the AudioProcessingEvent), thus introducing a latency.

---
Reply to this email directly or view it on GitHub:
https://github.com/WebAudio/web-audio-api/issues/69#issuecomment-24244170

Received on Wednesday, 11 September 2013 14:30:01 UTC