W3C home > Mailing lists > Public > public-audio@w3.org > July to September 2013

Re: [web-audio-api] (JSWorkers): ScriptProcessorNode processing in workers (#113)

From: Olivier Thereaux <notifications@github.com>
Date: Wed, 11 Sep 2013 07:30:23 -0700
To: WebAudio/web-audio-api <web-audio-api@noreply.github.com>
Message-ID: <WebAudio/web-audio-api/issues/113/24244792@github.com>
> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=17415#87) by Chris Rogers on W3C Bugzilla. Tue, 31 Jul 2012 06:20:54 GMT

(In reply to [comment #87](#issuecomment-24244784))
> (In reply to [comment #85](#issuecomment-24244774))
> > I think there's been a misunderstanding that somehow the JavaScript code
> > rendering audio in a JavaScriptAudioNode callback will block the audio thread! 
> > This is not the case.  An implementation should use buffering
> > (producer/consumer model) where the JS thread produces and the audio thread
> > consumes (with no blocking).  This is how it's implemented in WebKit.
> 
> How does this work in a subgraph similar to this?:
> 
> +------------+      +---------------------+      +------------------+
> | SourceNode |----->| JavaScriptAudioNode |----->| BiquadFilterNode |
> +------------+      +---------------------+   +->|                  |
>                                               |  +------------------+
> +------------+      +---------------------+   |
> | SourceNode |----->|    AudioGainNode    |---+
> +------------+      +---------------------+
> 
> (hope this ASCII art works)
> 
> I assume that without the input from the SourceNode, the JavaScriptAudioNode
> will not be able to produce anything (hence its callback will not be fired
> until enough data is available), and likewise the BiquadFilterNode can not
> produce any sound until data is available from both the JavaScriptAudioNode and
> the AudioGainNode.
> 
> In other words, if the JavaScriptAudioNode callback in the main thread is
> delayed by a setInterval event, for instance, I guess that at least the
> BiquadFilterNode (and all nodes following it?) will need to halt until the JS
> callback gets fired and finished so that it has produced the necessary data for
> the graph to continue?

No, this is not the case.  We're talking about a real-time system: the audio thread runs at real-time priority under strict time constraints, and in such systems it's very bad to block the audio thread.  In fact, no blocking calls are allowed in our WebKit implementation, including taking any locks.  This is how pro-audio systems work.

In your scenario, if the main thread is delayed as you describe, there will simply be a glitch due to a buffer underrun in the JavaScriptAudioNode, but the other processing nodes in the graph, which are native, will continue processing smoothly.  Obviously the glitch from the JavaScriptAudioNode is bad, but we already know it is possible due to things such as setInterval(), GC, etc.  In fact, it's one of the first things I described in some detail in my spec document over two years ago.  Choosing larger buffer sizes for the JavaScriptAudioNode can help alleviate this problem.
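To make the producer/consumer model concrete, here is a minimal sketch (illustrative only, not the actual WebKit implementation; class and method names are invented): the JS thread pushes rendered sample blocks into a ring buffer, and the audio thread pops them without ever blocking.  On underrun the consumer gets silence, i.e. the glitch described above, rather than stalling the rest of the graph.

```javascript
// Illustrative single-producer/single-consumer ring buffer (an
// assumption about the design, not the real WebKit source).  The JS
// thread calls push(); the audio thread calls pop() and never blocks:
// when no data is ready it outputs zeros (the audible glitch).
class SampleRingBuffer {
  constructor(blockSize, numBlocks) {
    this.blockSize = blockSize;
    this.blocks = Array.from({ length: numBlocks },
                             () => new Float32Array(blockSize));
    this.readIndex = 0;   // advanced only by the consumer
    this.writeIndex = 0;  // advanced only by the producer
  }

  // Producer (JS thread): copy one rendered block in.
  // Returns false (block is dropped) if the buffer is full.
  push(samples) {
    const next = (this.writeIndex + 1) % this.blocks.length;
    if (next === this.readIndex) return false;  // full: overrun
    this.blocks[this.writeIndex].set(samples);
    this.writeIndex = next;
    return true;
  }

  // Consumer (audio thread): copy one block out without blocking.
  // On underrun, fill the output with silence and report the glitch.
  pop(out) {
    if (this.readIndex === this.writeIndex) {   // empty: underrun
      out.fill(0);
      return false;
    }
    out.set(this.blocks[this.readIndex]);
    this.readIndex = (this.readIndex + 1) % this.blocks.length;
    return true;
  }
}
```

A larger JavaScriptAudioNode buffer size corresponds to more queued blocks here: the main thread gets more slack before an underrun occurs, at the cost of added latency.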

---
Reply to this email directly or view it on GitHub:
https://github.com/WebAudio/web-audio-api/issues/113#issuecomment-24244792
Received on Wednesday, 11 September 2013 14:39:08 UTC
