- From: Chris Rogers <crogers@google.com>
- Date: Wed, 16 Nov 2011 16:15:48 -0800
- To: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
- Cc: public-audio@w3.org
- Message-ID: <CA+EzO0nP3HgtOp22tuC=SvdJ89t7SUjR04RJiFP1X25XwLmPWA@mail.gmail.com>
On Wed, Nov 16, 2011 at 3:15 PM, Jussi Kalliokoski <jussi.kalliokoski@gmail.com> wrote:

> Hello folks,
>
> Just thought I'd address a bit of a hot topic on my mind right now, that
> being the performance of full/partial JS audio in the Web Audio API.
> Correct me if I'm wrong, but currently, if you add a
> JavaScriptProcessingNode to a graph, you'll basically expose the whole
> graph to the performance problems listed in the specification [1], which
> the high-level API is trying to shield you from. In my experience, if
> you're careful with garbage collection and JS engines keep getting faster,
> one problem will remain prominent: the audio-processing JS runs in the
> same thread as any UI operations, blocking XHRs and whatnot, and is thus
> very unstable. This is usually to be avoided in any kind of audio
> processing, so I've come up with two alternative, or possibly coexisting,
> solutions to that problem:
>
> 1) Instead of a callback, you could pass a Web Worker to the
> createJavaScriptProcessingNode function (similarly to Robert's
> MediaStreamProcessing API). This worker would have an event called
> "onaudioprocess", and the event would be fired with the same arguments
> as the JavaScriptProcessingNode's "onaudioprocess" event.
>
> 2) Expose the AudioContext API to workers as well.
>
> How do these propositions sound?

Hi Jussi,

I've been thinking about (1) as well and believe it should be doable. (2) is a lot more complex, so I'm not sure about that one.

Chris
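[Editor's sketch] For concreteness, here is a minimal sketch of what proposal (1) might look like. Only the names quoted in the message (createJavaScriptProcessingNode, onaudioprocess) come from the proposal itself; the worker wiring and the main-thread calls are hypothetical illustration, not spec text.

```javascript
// Hypothetical sketch of proposal (1): the processing callback runs in a
// Web Worker rather than on the main thread, so UI work and blocking XHRs
// cannot stall the audio callback.

// The worker-side kernel is ordinary per-sample processing:
function processBlock(input, output) {
  // Apply a fixed gain of 0.5 to each sample (example DSP only).
  for (var i = 0; i < input.length; i++) {
    output[i] = input[i] * 0.5;
  }
}

// Inside a worker script, the proposed event might be wired up like this
// (shown as comments because the API is hypothetical):
//
//   // worker.js
//   self.onaudioprocess = function (e) {
//     processBlock(e.inputBuffer.getChannelData(0),
//                  e.outputBuffer.getChannelData(0));
//   };
//
// and on the main thread, per the proposal, a worker would be passed in
// place of a callback:
//
//   var ctx = new webkitAudioContext();
//   var node = ctx.createJavaScriptProcessingNode(new Worker("worker.js"), 1024);
//   source.connect(node);
//   node.connect(ctx.destination);
```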
Received on Thursday, 17 November 2011 00:16:25 UTC