Re: [web-audio-api] (JSWorkers): ScriptProcessorNode processing in workers (#113)

> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=17415#88) by Jussi Kalliokoski on W3C Bugzilla. Tue, 31 Jul 2012 07:52:15 GMT

(In reply to [comment #88](#issuecomment-24244792))
> (In reply to [comment #87](#issuecomment-24244784))
> > (In reply to [comment #85](#issuecomment-24244774))
> > > I think there's been a misunderstanding that somehow the JavaScript code
> > > rendering audio in a JavaScriptAudioNode callback will block the audio thread! 
> > > This is not the case.  An implementation should use buffering
> > > (producer/consumer model) where the JS thread produces and the audio thread
> > > consumes (with no blocking).  This is how it's implemented in WebKit.
> > 
> > How does this work in a subgraph similar to this?:
> > 
> > +------------+      +---------------------+      +------------------+
> > | SourceNode |----->| JavaScriptAudioNode |----->| BiquadFilterNode |
> > +------------+      +---------------------+   +->|                  |
> >                                               |  +------------------+
> > +------------+      +---------------------+   |
> > | SourceNode |----->|    AudioGainNode    |---+
> > +------------+      +---------------------+
> > 
> > (hope this ASCII art works)
> > 
> > I assume that without the input from the SourceNode, the JavaScriptAudioNode
> > will not be able to produce anything (hence its callback will not be fired
> > until enough data is available), and likewise the BiquadFilterNode cannot
> > produce any sound until data is available from both the JavaScriptAudioNode and
> > the AudioGainNode.
> > 
> > In other words, if the JavaScriptAudioNode callback in the main thread is
> > delayed by a setInterval event, for instance, I guess that at least the
> > BiquadFilterNode (and all nodes following it?) will need to halt until the JS
> > callback has fired and finished, so that it has produced the necessary data for
> > the graph to continue?
> 
> No, this is not the case.  We're talking about a real-time system with an audio
> thread running at realtime priority under time constraints.  In real-time systems
> it's very bad to block in the realtime audio thread.  In fact, no blocking calls
> are allowed in our WebKit implementation, including taking any locks.
> This is how pro-audio systems work.  In your scenario, if the main thread is
> delayed as you describe then there will simply be a glitch due to buffer
> underrun in the JavaScriptAudioNode, but the other graph processing nodes which
> are native will continue processing smoothly.  Obviously the glitch from the
> JavaScriptAudioNode is bad, but we already know that this can happen due
> to things such as setInterval(), GC, etc.  In fact, it's one of the first
> things I described in some detail in my spec document over two years ago.
> Choosing larger buffer sizes for the JavaScriptAudioNode can help alleviate
> this problem.

Hmm? Convolution with big kernels is just as susceptible to glitches as a JS node, if not more so. So do you mean that if any of the nodes fails to deliver, the others still keep going?

It seems to me that the current behavior in the WebKit implementation is that if the buffer isn't filled in time, it starts looping the previous buffer, whereas when a convolution node fails to deliver, the output just glitches and jumps all over the place. Is this correct?
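
In buffer terms, I picture the consumer side something like this. This is a rough single-threaded sketch of what I *think* is going on, not actual WebKit code, and the underrun strategy parameter is my own invention just to contrast the two behaviors:

```js
// Rough sketch of a non-blocking producer/consumer buffer, assuming I've
// understood the model correctly -- NOT WebKit's actual code.
// The JS thread calls write(); the audio thread calls read() and never
// blocks: on underrun it falls back to a strategy instead of waiting.
function RingBuffer(capacity, underrunStrategy) {
  var data = new Float32Array(capacity);
  var previous = null; // last successfully read block, for "loop" underruns
  var readPos = 0, writePos = 0, available = 0;

  // Producer (main/JS thread): drops samples if the buffer is full.
  this.write = function (block) {
    for (var i = 0; i < block.length && available < capacity; i++) {
      data[writePos] = block[i];
      writePos = (writePos + 1) % capacity;
      available++;
    }
  };

  // Consumer (audio thread): never blocks waiting for the producer.
  this.read = function (out) {
    if (available < out.length) {
      // Underrun: either loop the previous buffer or output silence.
      for (var i = 0; i < out.length; i++)
        out[i] = (underrunStrategy === 'loop' && previous)
          ? previous[i % previous.length]
          : 0;
      return;
    }
    for (var i = 0; i < out.length; i++) {
      out[i] = data[readPos];
      readPos = (readPos + 1) % capacity;
      available--;
    }
    previous = new Float32Array(out); // copy, in case of a later underrun
  };
}
```

With `'loop'` you'd get the repeat-the-last-buffer behavior I describe above; with silence you'd get a plain dropout instead.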

Seems a bit weird to treat parts of the graph differently, but I think I might have misunderstood something.
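
On the buffer size point, for reference, this is the kind of thing I understand you're suggesting (prefixed WebKit API, simple pass-through just for illustration):

```js
// Bigger buffers give the main thread more slack before an underrun,
// at the cost of extra latency. (Prefixed WebKit API, pass-through only.)
var context = new webkitAudioContext();

// 4096 frames instead of the 256-frame minimum: at 44.1 kHz that's roughly
// 93 ms of headroom per callback instead of roughly 6 ms.
var jsNode = context.createJavaScriptNode(4096, 1, 1);

jsNode.onaudioprocess = function (event) {
  var input = event.inputBuffer.getChannelData(0);
  var output = event.outputBuffer.getChannelData(0);
  for (var i = 0; i < output.length; i++)
    output[i] = input[i]; // real processing would go here
};

jsNode.connect(context.destination);
```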

---
Reply to this email directly or view it on GitHub:
https://github.com/WebAudio/web-audio-api/issues/113#issuecomment-24244798

Received on Wednesday, 11 September 2013 14:39:30 UTC