W3C home > Mailing lists > Public > public-audio@w3.org > July to September 2013

Re: [web-audio-api] OfflineAudioContext and ScriptProcessorNodes (#69)

From: Olivier Thereaux <notifications@github.com>
Date: Wed, 11 Sep 2013 07:29:35 -0700
To: WebAudio/web-audio-api <web-audio-api@noreply.github.com>
Message-ID: <WebAudio/web-audio-api/issues/69/24244190@github.com>
> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=22723#9) by Chris Rogers on W3C Bugzilla. Fri, 19 Jul 2013 22:18:38 GMT

(In reply to [comment #9](#issuecomment-24244185))
> Another point I wondered about is whether the "back and forth" between the
> audio processing thread and the main thread to process a script node will
> force sub-realtime rendering for offline audio contexts in current browser
> architectures. 
> Currently, it is not unreasonable to expect a delay of 4ms between event
> firing and callback invocation. That limits the number of calls that can be
> made to a script node to 250 calls per second. If a block size of 128 is
> used, that might limit the rate of generating audio samples to 32KHz. The
> longer the event->callback delay, the worse this gets.

Yes, it's true that running the ScriptProcessorNode with a buffer size of 128 carries a performance penalty, so it's probably best not to force the ScriptProcessorNode to process at that size.  I was just saying that *if* we want the ScriptProcessorNode to have zero in/out latency, then we'll need a 128-frame buffer size.
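For what it's worth, the throughput ceiling in the quoted comment can be sketched as simple arithmetic: if every 128-frame block costs one event/callback round trip, the round-trip delay bounds the achievable sample rate. This is a minimal illustration, not a measurement; the 4 ms figure is the quoted assumption, and the function name is mine.

```python
# Hedged sketch of the throughput bound described above: if each block of
# audio requires one main-thread event -> callback round trip, the delay
# per round trip caps the number of blocks (and hence samples) per second.
# The 4 ms delay is the assumption from the quoted comment, not a measurement.

def max_sample_rate(block_size: int, round_trip_delay_s: float) -> float:
    """Upper bound on samples/second when each block costs one round trip."""
    calls_per_second = 1.0 / round_trip_delay_s
    return block_size * calls_per_second

# 4 ms per round trip with 128-frame blocks -> 250 calls/s -> 32,000
# samples/s, i.e. sub-realtime for a 44.1 kHz context.
print(max_sample_rate(128, 0.004))
```

With a shorter round trip (say 1 ms) the same arithmetic gives 128,000 samples/s, which is why the quoted comment notes that the longer the event->callback delay, the worse this gets.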

I'm fairly sure the dire prediction of a 4ms delay is not something we would normally see, based on my experience with the WebKit/Blink code.  I haven't yet had a chance to measure the exact performance hit, but we have nice tracing features in Chrome that give quite a detailed picture.  I'll try some experiments here...

Received on Wednesday, 11 September 2013 14:33:35 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 19:03:24 UTC