
Re: Testing the Web Audio API

From: Russell McClellan <russell@motu.com>
Date: Fri, 17 May 2013 12:05:51 -0400
Cc: Russell McClellan <russell@motu.com>, Chris Lowis <chris.lowis@bbc.co.uk>, WebAudio <public-audio@w3.org>
Message-Id: <35274B58-F9CF-43E8-86C7-747A05A206C6@motu.com>
To: Ehsan Akhgari <ehsan.akhgari@gmail.com>

On May 17, 2013, at 11:35 AM, Ehsan Akhgari <ehsan.akhgari@gmail.com> wrote:

> Hmm, I'm not sure if I understand your point here.  Where is the thread-safety issue?  The script processor node and the source node are created right after each other, and then the source node is started, so there is no way for the audioprocess event to be dispatched before the source node has started.

My point wasn't that it won't work in Firefox; the point was that it could fail in a spec-conformant implementation.  If the implementation of ScriptProcessorNode double-buffers, and runs whether or not an input is attached (IIRC, this is how the WebKit implementation works), then the following ordering of events would cause the test to break:

1) The context is created.  Samples start flowing on the audio thread.
2) The gain node is created on the main thread.
3) The source node is created on the main thread.
4) The processor is created on the main thread.
5) On the audio thread, it's determined that the processor needs to supply samples.  The processor's callback is called, which creates an event to be handled on the main thread.  The input buffer for this event is set to silence, since at this time the source node has not started.
6) The source node is told to "start immediately" on the main thread.
7) The event created in (5) is handled on the main thread, causing the test to fail because the input does not contain the required samples.

While I'm certainly not sure that the WebKit implementation actually behaves like this, I don't see anything in the specification that disallows this sort of behavior.
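The race doesn't need any audio to demonstrate; the sketch below just simulates the two interleavings of steps (5) and (6) above.  All of the names here are illustrative, not Web Audio API calls:

```javascript
// Model of the ordering described above: if the "audio thread" snapshots
// the processor's input buffer before start() runs on the "main thread",
// the dispatched audioprocess event carries silence instead of samples.
function simulate(order) {
  let sourceStarted = false;
  let capturedInput = null; // what the audioprocess event will see

  const steps = {
    // step 5: audio thread fills the event's input buffer
    audioThreadFillsBuffer() {
      capturedInput = sourceStarted ? "samples" : "silence";
    },
    // step 6: main thread calls source.start()
    mainThreadStartsSource() {
      sourceStarted = true;
    },
  };

  order.forEach((name) => steps[name]());
  return capturedInput; // step 7: event handled on the main thread
}

// If start() wins the race, the test sees real samples:
console.log(simulate(["mainThreadStartsSource", "audioThreadFillsBuffer"])); // "samples"
// But nothing in the spec forbids the opposite interleaving:
console.log(simulate(["audioThreadFillsBuffer", "mainThreadStartsSource"])); // "silence"
```

A test written against this pattern only passes if the first interleaving happens to occur, which is exactly the nondeterminism in question.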

> Each audioprocess event will be dispatched only after the input nodes have produced the specified buffer size, which means that if you have one ScriptProcessorNode feeding into another, which is testing its event's inputBuffer to see what the previous node has generated, the audioprocess event on the second node cannot be dispatched until the event on the first node has been dispatched (assuming they're both using the same buffer size, of course.)

I don't think this is how it works in WebKit: ScriptProcessorNodes run whether or not an input is attached.  Otherwise, how could you use a ScriptProcessorNode as a signal generator?
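A minimal sketch of that usage, assuming the behavior described above (a processor with zero inputs still being driven by the audio thread; the parameter choices here are just illustrative):

```javascript
// A ScriptProcessorNode used as a sine generator.  It has no input
// connections at all, so it can only produce sound if the implementation
// runs it regardless of attached inputs.
function fillSine(channel, sampleRate, freq, startFrame) {
  for (let i = 0; i < channel.length; i++) {
    channel[i] = Math.sin((2 * Math.PI * freq * (startFrame + i)) / sampleRate);
  }
}

// Browser-only wiring (AudioContext exists only in the page):
if (typeof AudioContext !== "undefined") {
  const context = new AudioContext();
  // 1024-frame buffer, 0 input channels, 1 output channel
  const generator = context.createScriptProcessor(1024, 0, 1);
  let frame = 0;
  generator.onaudioprocess = (e) => {
    fillSine(e.outputBuffer.getChannelData(0), context.sampleRate, 440, frame);
    frame += e.outputBuffer.length;
  };
  generator.connect(context.destination);
}
```

If audioprocess events were only dispatched once an upstream node had produced a full buffer, a generator like this, with no upstream nodes at all, would never fire.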

Thanks,
-Russell
Received on Friday, 17 May 2013 16:06:26 UTC
