Re: Audio Workers - please review

> 
> Firstly, please note that DOM events have synchronous semantics.  But of course that tells us nothing about when these events get dispatched.  As Chris previously described, his intention is that the audio worker dispatches these events to all of the worker nodes in the graph sequentially for each block.  That would take away the chance of the UA running some worker nodes in parallel if the output of neither one is a direct or indirect input to the others, but it's also a good thing, in that dispatching these events asynchronously will create latency that we cannot avoid.  That latency is an unfortunate property of the current ScriptProcessorNode.  How do we avoid such latency if we adopt an asynchronous processing model as you described above?

I may be using some language inexpertly here. By calling these events “asynchronous”, I only mean that AudioProcessingEvents are not dispatched with any discoverable synchronous relationship to events dispatched to other audio nodes, or to the main thread. I do not mean that there is an actual handoff from the audio thread; sorry for any misunderstanding there.

These events would still be synchronous with respect to other events dispatched to the same node: no interleaving of onmessage or onaudioprocess callbacks. In that respect they retain the synchronous semantics of DOM events.

Of course, the intention is to implement this mostly as Chris previously described. However, we must avoid normative statements like “the UA will invoke callbacks on all of the worker nodes in the graph sequentially for each block”. Language like that encourages developers to make assumptions that would later block optimizations such as running multiple audio threads.
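
For concreteness, here is a rough sketch of the kind of worker script I have in mind. The handler names (onaudioprocess, onmessage) and the event's inputs/outputs layout are my assumptions about the current proposal, not settled API:

    // Inside the audio worker's global scope (sketch only).
    var gain = 1.0;

    onmessage = function (e) {
      // Runs to completion before any other callback on this node;
      // it never interleaves with onaudioprocess below.
      gain = e.data.gain;
    };

    onaudioprocess = function (e) {
      // Copy input to output with a gain. Nothing here is observably
      // ordered relative to callbacks on *other* nodes in the graph,
      // so the UA stays free to schedule nodes however it likes.
      var input = e.inputs[0];
      var output = e.outputs[0];
      for (var ch = 0; ch < output.length; ch++) {
        for (var i = 0; i < output[ch].length; i++) {
          output[ch][i] = input[ch][i] * gain;
        }
      }
    };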

> 
> There are definitely use cases for sending arbitrary messages to the worker.  Such messages can contain information such as "the user fired a gun on the main thread", so that the worker can start outputting a synthesized gunshot noise in the case of a game, for example.  But I definitely agree that the current postMessage() API is too permissive (it effectively makes it possible for you to post arbitrary MessagePorts around on these workers, for example.)

I agree with you re the use case — but see my other reply to Jussi.
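
For what it's worth, here is roughly how I picture that use case in code. This is only a sketch; the createAudioWorker() signature and the node.postMessage()/worker onmessage pairing are assumptions based on the current proposal:

    // Main thread (sketch): tell the worker node that the player fired.
    var node = context.createAudioWorker('gunshot.js', 1, 1);
    fireButton.onclick = function () {
      node.postMessage({ type: 'gunshot', when: context.currentTime });
    };

    // Worker side (sketch): queue the request and synthesize the noise
    // in subsequent onaudioprocess callbacks.
    var pending = [];
    onmessage = function (e) {
      if (e.data.type === 'gunshot') pending.push(e.data.when);
    };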

>  
> 3-5. AudioParam transferability: I can’t really see the use case for AudioParam transferability. AudioParams seem to be the preferred channel for communication between the main thread and scripted nodes, and they stand alone in supporting that communication. Why would we transfer them in a separate message?
> 
> As I stated before, what happens when you modify the state of an AudioParam on the main thread after handing it off to a worker node?  At least, the semantics in that case need to be specified!

Absolutely.
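
To spell out the ambiguity you're pointing at (purely a sketch; whether an AudioParam is transferable at all, and via what mechanism, is exactly the open question):

    // Main thread (sketch). Suppose an AudioParam could be handed off
    // to the worker in a postMessage transfer list:
    var param = node.gain;                        // hypothetical AudioParam on the node
    node.postMessage({ param: param }, [param]);  // hypothetical transfer
    param.value = 0.5;                            // ...then mutated afterwards
    // Does this throw? Is the write silently ignored? Does the worker
    // observe 0.5 on the next block? The spec needs to say which.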

…Joe

Received on Thursday, 11 September 2014 16:55:09 UTC