- From: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
- Date: Fri, 12 Sep 2014 10:14:30 +0300
- To: Joseph Berkovitz <joe@noteflight.com>
- Cc: Chris Wilson <cwilso@google.com>, Norbert Schnell <Norbert.Schnell@ircam.fr>, Ehsan Akhgari <ehsan@mozilla.com>, "public-audio@w3.org" <public-audio@w3.org>
- Message-ID: <CAJhzemXfzkSbnu_2a1xaFmHmUwXJKEOpxn-zv3iRTM-Gdx8t2Q@mail.gmail.com>
On Fri, Sep 12, 2014 at 4:13 AM, Joseph Berkovitz <joe@noteflight.com> wrote:

> @Jussi: In the interest of avoiding opinions about how complicated others
> imagine the code is vs. what you imagine… could you perhaps provide a code
> sample illustrating the API approach that you are talking about, in
> particular showing how multiple nodes for a worker partition their working
> state? I feel that what needs to be understood by devs is more than just
> “how closures work”: it’s how to usefully employ closures to make the API
> work.
>
> @Implementors: I would love to see some commentary from those in the know,
> on the inevitability of the importScripts/glitching problem that Jussi is
> raising. In particular, I am not sure that initialization of nodes prior to
> their entering the started state (where importScripts would likely be
> invoked) needs to occur in the audio thread or block it. A node can be
> created and scheduled well before it actually starts to handle
> AudioProcessingEvents, and this is often the case even in an application
> utilizing large numbers of nodes. In the current API design, for example,
> audio processing doesn’t ever block waiting for the main thread to do the
> (admittedly minimal) work of creating the nodes that do the processing.

I can't answer this as an implementer, but I decided to build a benchmark to gather some data. My benchmark has two tests:

1) Send a message to a worker; the worker sends a message back as soon as it receives one. Measure the round trip. [1]
2) Otherwise the same, but the worker also calls importScripts for a ~10kb script. [2]

The hypothesis is that there's a latency to the startup of the worker. The tests don't give any insight into how much time goes to VM context initialization, just the cost of having 10kb more script. The results are actually surprising only in the difference between Chrome and Firefox.
I ran the tests on my 2014 MacBook Pro Retina and got the following:

Chrome, without importScripts
  Average iteration time: 60.41140500077745ms
  Median iteration time: 59.61799999931827ms

Chrome, with importScripts
  Average iteration time: 64.66047200083267ms
  Median iteration time: 62.16400000266731ms

Firefox, without importScripts
  Average iteration time: 3.243685941999998ms
  Median iteration time: 2.689453999999955ms

Firefox, with importScripts
  Average iteration time: 5.960713365999982ms
  Median iteration time: 5.2067159999996875ms

So running importScripts for a ~10kb script file adds a latency of ~3ms. At 48kHz, that would block the audio thread for ~144 samples (3ms * 48kHz), which at a block size of 128 amounts to more than one block of processing. So every ~10kb AudioWorkerNode you were to spawn would cause at least one frame of audio to glitch, for the whole graph. On a state-of-the-art laptop.

[1] http://fs.juss.in/worker-latency/
[2] http://fs.juss.in/worker-latency/with-imports/

> …Joe
>
> On Sep 11, 2014, at 5:56 PM, Jussi Kalliokoski <jussi.kalliokoski@gmail.com> wrote:
>
> On Thu, Sep 11, 2014 at 9:45 PM, Joseph Berkovitz <joe@noteflight.com> wrote:
>
>> On Sep 11, 2014, at 2:06 PM, Jussi Kalliokoski <jussi.kalliokoski@gmail.com> wrote:
>>
>> On Thu, Sep 11, 2014 at 9:01 PM, Joseph Berkovitz <joe@noteflight.com> wrote:
>>
>>> Jussi,
>>>
>>> I agree the issue of importScripts overhead could be pretty major, if
>>> this overhead is in fact likely to be substantial. I am not knowledgeable
>>> about the extent to which browsers optimize imports of the same scripts in
>>> multiple workers. However it is almost a certainty that nodes will want to
>>> exploit substantial libraries.
>>>
>>> I did not see a convenient way in your proposed API to make it easy for
>>> different nodes based on the same worker to partition their ongoing
>>> computational state (maintained in between onaudioprocess callbacks) from
>>> each other, though.
>>> Did I miss something? Doesn’t there need to be a
>>> persistent per-node object that can hold this state?
>>
>> The state can, for example, be provided by a closure in the
>> onaudionodecreated event handler, or by a global WeakMap that uses the
>> AudioNodeHandles as keys. This makes sure that the state associated with
>> the node is garbage collected when the node is.
>>
>> I see. Either of the above would work, but it feels like in this model
>> the developer needs to do some more careful bookkeeping about state rather
>> than just sticking state into the node’s dedicated global scope. That makes
>> the approach feel more complex for developers to consume, relative to
>> Chris’s proposal or the original ScriptProcessorNode.
>
> I think the majority of people writing code that runs inside AudioWorkers
> will be library authors. As an author of more libraries than I care to
> admit, I'd personally rather learn how closures work than get a barrage of
> bug reports from people using my code, complaining that my code is making
> their applications glitchy (which I get at the moment, btw), without being
> able to really do anything about it except blame the API and mark the bugs
> qualifying as blockers as WONTFIX.
>
> The next biggest group, although likely much smaller, will probably be
> people doing experimentation. Learning. While they're at it, they'll
> probably want to learn the language as well.
>
> In short, I don't think we have any target audience for AudioWorkers
> that'd have significant trouble understanding how closures work. If we do,
> we should probably specify that group, add it to the use cases list and
> decide how high the priority for catering to them is.
>
>> …Joe
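The WeakMap pattern discussed in the quoted exchange can be sketched in plain JavaScript. This is a simulation, not real AudioWorker code: the `onaudionodecreated`/`onaudioprocess` hooks and the plain-object node handles below stand in for Jussi's proposed API, which is not a shipping interface.

```javascript
// Per-node state keyed by node handle. Because WeakMap keys are held
// weakly, each state entry is garbage-collected together with its node.
const nodeState = new WeakMap();

// Stand-in for the proposed onaudionodecreated hook: allocate fresh
// state for each node that gets bound to this worker.
function onaudionodecreated(node) {
  nodeState.set(node, { phase: 0 });
}

// Stand-in for per-node audio processing: a sine oscillator whose phase
// accumulator lives in that node's own state, not in worker globals.
function onaudioprocess(node, output, sampleRate = 48000, freq = 440) {
  const state = nodeState.get(node);
  for (let i = 0; i < output.length; i++) {
    output[i] = Math.sin(2 * Math.PI * state.phase);
    state.phase += freq / sampleRate;
  }
}

// Two independent "nodes" sharing one worker-global scope:
const nodeA = {};
const nodeB = {};
onaudionodecreated(nodeA);
onaudionodecreated(nodeB);

onaudioprocess(nodeA, new Float32Array(128));
onaudioprocess(nodeA, new Float32Array(128)); // nodeA keeps advancing...
onaudioprocess(nodeB, new Float32Array(128)); // ...while nodeB starts fresh.

console.log(nodeState.get(nodeA).phase > nodeState.get(nodeB).phase); // true
```

The closure variant mentioned in the thread is equivalent: instead of a shared WeakMap, onaudionodecreated would install a per-node onaudioprocess callback that closes over its own `state` object.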
Received on Friday, 12 September 2014 07:14:59 UTC