- From: Ehsan Akhgari <ehsan@mozilla.com>
- Date: Thu, 11 Sep 2014 15:17:53 -0400
- To: Joseph Berkovitz <joe@noteflight.com>
- Cc: Chris Wilson <cwilso@google.com>, "public-audio@w3.org" <public-audio@w3.org>
- Message-ID: <CANTur_6ugrQqiv2U7mOBND1h3vQ4=fR0zkusESDWNhNYWvXVGA@mail.gmail.com>
On Thu, Sep 11, 2014 at 12:54 PM, Joseph Berkovitz <joe@noteflight.com> wrote:

> > Firstly, please note that DOM events have synchronous semantics. But of course that tells us nothing about when these events get dispatched. As Chris previously described, his intention is that the audio worker dispatches these events to all of the worker nodes in the graph sequentially for each block. That would take away the chance of the UA running some worker nodes in parallel when the output of neither is a direct or indirect input to the other, but it's also a good thing, in that dispatching these events asynchronously would create latency that we cannot avoid. That latency is an unfortunate property of the current ScriptProcessorNode. How do we avoid such latency if we adopt an asynchronous processing model as you described above?
>
> I may be using some language inexpertly here — by saying they are "asynchronous", I only mean that AudioProcessingEvents are not dispatched with any discoverable synchronous relationship to any other events dispatched to any other audio nodes, or to the main thread. I do not mean that there is an actual handoff from the audio thread; sorry for any misunderstanding there.

Oh, then yes, that is my understanding as well.

> These events would still be synchronous with respect to other events dispatched to the node itself — no interleaving of onmessage or onaudioprocess callbacks. In that sense, the events still have synchronous semantics in the same sense as DOM events.

I think Chris suggested in his other email that they will also be synchronous in relation to all of the other audio nodes, in that none of the others run at the same time.

> Of course, the intention is to implement this mostly as Chris previously described. However, we must avoid any statements like "the UA will invoke callbacks on all of the worker nodes in the graph sequentially for each block". That starts to encourage developers to make assumptions that will later block optimizations like having multiple audio threads.

The assumptions are going to be made one way or another, which is why I'm interested in not leaving anything open to interpretation. I think we should try to specify the observable behavior as strictly as we can. And yes, admittedly some of that may cost us some optimization opportunities. But as I mentioned before, those optimizations inherently entail latency, which would make these nodes not much more useful than ScriptProcessorNodes.

> > There are definitely use cases for sending arbitrary messages to the worker. Such messages can contain information such as "the user fired a gun on the main thread", so that the worker can start outputting a synthesized gunshot noise in the case of a game, for example. But I definitely agree that the current postMessage() API is too permissive (it effectively makes it possible to post arbitrary MessagePorts around on these workers, for example).
>
> I agree with you re the use case — but see my other reply to Jussi.
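For concreteness, the kind of messaging I have in mind looks roughly like this. The createAudioWorker() signature (and that it returns the node directly rather than via a promise) and the worker-global onmessage/onaudioprocess hooks follow my reading of Chris's proposal; they are assumptions, not settled API:

```js
// Main thread: create a worker node and notify it when the user fires.
// createAudioWorker()'s shape is assumed from the proposal under
// discussion; a direct (non-promise) return is assumed for brevity.
var ctx = new AudioContext();
var node = ctx.createAudioWorker("gunshot.js", 0, 1);
node.connect(ctx.destination);

document.addEventListener("mousedown", function () {
  // Arbitrary structured-clone payload, as discussed above.
  node.postMessage({ type: "fire" });
});
```

And on the worker side:

```js
// gunshot.js — runs in the AudioWorker global scope. Because onmessage
// and onaudioprocess never interleave (the synchronous semantics
// discussed above), plain shared variables need no locking.
var firing = false;
var phase = 0;

onmessage = function (e) {
  if (e.data.type === "fire") {
    firing = true;
    phase = 0;
  }
};

onaudioprocess = function (e) {
  // Invoked once per rendering block on the audio thread, so the "fire"
  // message takes effect at the next block boundary — no extra latency.
  var out = e.outputBuffer.getChannelData(0);
  for (var i = 0; i < out.length; i++) {
    // Decaying white-noise burst as a crude stand-in for real gunshot
    // synthesis; roughly a 0.4 s tail at 44.1 kHz.
    out[i] = firing ? (Math.random() * 2 - 1) * Math.exp(-phase++ / 4410) : 0;
  }
  if (phase > 4 * 4410) {
    firing = false;
  }
};
```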
> > > 3-5. AudioParam transferability: I can't really see the use case for AudioParam transferability. AudioParams seem to be the preferred channel for communication between the main thread and scripted nodes, and they stand alone in supporting that communication. Why would we transfer them in a separate message?
> >
> > As I stated before, what happens when you modify the state of an AudioParam on the main thread after handing it off to a worker node? At the very least, the semantics of that case need to be specified!
>
> Absolutely.
>
> …Joe
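To make the question concrete, this is the sequence whose observable behavior the spec has to pin down. addParameter() and its returning an AudioParam are assumed from the proposal; the names are illustrative only:

```js
// Main thread. addParameter() creating a worker-visible AudioParam
// follows the proposal's shape and is an assumption, not settled API.
var node = ctx.createAudioWorker("processor.js", 1, 1);
var gain = node.addParameter("gain", 1.0);

// The param is now (conceptually) handed off to the audio worker.

// What are the semantics of this later main-thread mutation? An error,
// a silent no-op, or a value the worker observes at some unspecified
// block boundary? Whatever the answer, it needs to be in the spec.
gain.setValueAtTime(0.5, ctx.currentTime + 1);
```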
-- Ehsan

Received on Thursday, 11 September 2014 19:19:08 UTC