- From: Dmitry Lomov <dslomov@google.com>
- Date: Fri, 16 Mar 2012 06:16:28 -0700
- To: robert@ocallahan.org
- Cc: Chris Rogers <crogers@google.com>, Alistair MacDonald <al@signedon.com>, public-audio@w3.org
- Message-ID: <CAObu7DHFTzs-Duk7zkOo=_O1EfaFZAw285w-cBBLB-e_kxMGeQ@mail.gmail.com>
(Sorry for the late reply - I somehow missed this.)

On Thu, Mar 8, 2012 at 4:10 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
> On Thu, Mar 1, 2012 at 9:49 PM, Dmitry Lomov <dslomov@google.com> wrote:
>> On Wed, Feb 29, 2012 at 3:23 PM, Robert O'Callahan <robert@ocallahan.org> wrote:
>>> The writeAudio method still does a copy. Maybe that copy can be avoided
>>> with an alternative design using transfers, but simply preallocating an
>>> output buffer as Dmitry suggested isn't quite the right solution, since
>>> an onprocessmedia handler can output any amount of audio, including none.
>>> Instead we probably should have writeAudio take ownership of the buffer
>>> and neuter it, or alternatively keep the current copy semantics for
>>> writeAudio and have a separate transferAudio method that uses the
>>> transfer semantics.
>>
>> So these two proposals handle transfer-out but not transfer-in. As you
>> say, it will either be a copy in writeAudio, or, if we have a
>> transferAudio method, an allocation on every callback (because you cannot
>> reuse a transferred-out array buffer).
>> If the event has a preallocated output buffer, we will avoid any copies
>> and allocations.
>
> The allocation overhead can be practically eliminated by recycling buffers
> internally in the browser.

This assumes a lot about how array buffers are implemented in the UA. Note
that unless a GC happens in the worker, the browser wouldn't even know that
an array buffer can be recycled.

> The preallocated output buffer approach still has the problem of not
> handling situations where the processing engine isn't outputting the same
> amount of data that was input.

Right - so we should have separate input and output buffers. My
understanding is that in the typical case the processor produces the same
amount of output per each input?
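[The "cannot reuse a transferred-out array buffer" point above can be demonstrated directly. The sketch below uses structuredClone's transfer list as a stand-in for postMessage-ing a buffer to a worker; the variable names are illustrative, not part of any proposal.]

```javascript
// Once an ArrayBuffer is transferred, the sender's copy is detached
// ("neutered") and cannot be reused - which is why a transferAudio-style
// API forces a fresh allocation per callback unless the buffer is
// transferred back.
const samples = new Float32Array(128);
const before = samples.buffer.byteLength;   // 128 * 4 = 512 bytes

// Transfer the underlying buffer, as postMessage(msg, [buffer]) would.
const copy = structuredClone(samples.buffer, { transfer: [samples.buffer] });

const after = samples.buffer.byteLength;    // 0: the original is detached
```

[Running this (Node 17+ or any modern browser) shows `before` as 512, `after` as 0, and `copy.byteLength` as 512: the data survives only on the receiving side.]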
In that case, here is what we can do:

self.onprocessmedia = function (event) {
  var len = event.audioLength;
  if (!event.outputBuffer || event.outputBuffer.length != len) {
    event.outputBuffer = new Float32Array(len);
  }
  ...
}

I.e. the event has an outputBuffer property that the processor allocates
once, and it is then transferred in and out. If the processor produces
variable-length output, we can have an adjustable limit:

self.onprocessmedia = function (event) {
  var len = event.audioLength;
  if (!event.outputBuffer || event.outputBuffer.length < len) {
    event.outputBuffer = new Float32Array(len);
  }
  event.setOutputSampleLength(len);
  ...
}

>>>> Both current proposals suggest that any worker can handle only one
>>>> JavaScriptAudioNode/ProcessedMediaStream (since the processing nodes
>>>> run a predefined callback on the worker).
>>>
>>> This is definitely not the case for ProcessedMediaStream. There is no
>>> problem with using the same Worker for several ProcessedMediaStreams. If
>>> you want to have different types of processing performed in the same
>>> Worker, you could use a field of the 'params' object in each
>>> onprocessmedia event to distinguish them.
>>
>> So you would have something like:
>>
>> self.onprocessmedia = function(event) {
>>   if (event.params.routine == "sinGenerator") { ... }
>>   else if (event.params.routine == "foobarEffect") { ... }
>>   ...
>> }
>>
>> so you would basically have a dispatch in JavaScript?
>> This has several drawbacks:
>> 1) dispatch in JavaScript takes precious cycles that might have been
>> used for processing audio;
>> 2) methods structured like this are harder for a JavaScript engine JIT
>> to optimize;
>> 3) this requires params to have a value and to be transferred to the
>> worker on every call - this takes valuable cycles;
>> 4) the code looks ugly :)
>
> I'd write it like this:
>
> var dispatchTable = {
>   sinGenerator: function(event) {
>     ...
>   },
>   foobarEffect: function(event) {
>     ...
>   }
> };
> self.onprocessmedia = function(event) {
>   dispatchTable[event.params.type](event);
> };
> ...
> which reduces those drawbacks. I expect the overheads introduced are
> negligible.

The dynamic dispatch call dispatchTable[event.params.type](event) might be
pretty expensive, especially compared with the simple array operations that
processors usually are.

>> My proposal, that lets you specify a callback name, solves all of that.
>> Moreover, I can see other benefits:
>> a. no need to add extra methods to WorkerContext.idl
>> b. ease of code use and re-use. The user can package her library of
>> audio effects, where all audio effects have different names but conform
>> to the same signature, and then easily import those into the worker and
>> call them - no need for any boilerplate in the worker code itself.
>
> On the other hand, it reduces isolation of the worker, since it gives
> page script the ability to call any global function in the worker instead
> of going through predefined entry points (onmessage, onprocessmedia). I
> suspect that's not a good idea.

I do not think the isolation here is a big deal: the worker code, like the
page code, is fully under the control of the web page author (and there is
a same-origin restriction for workers, etc.).

> It's also more work to implement, for a case that maybe doesn't even
> matter that much.

I feel this is where our friction point is: I think that ease of writing
reusable effect libraries and being able to easily control worker
allocation is an important use case.

>>> However, I think we should not encourage authors to multiplex
>>> processing nodes onto Workers by hand, since the optimal assignment
>>> could depend on the details of the browser and platform (e.g. how
>>> heavyweight Workers are, and how many cores are available to the
>>> browser).
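[For reference, the dispatch-table pattern quoted above can be written out as a complete, runnable sketch. The event shape ({ params: { type, ... } }) follows the thread's examples; the return values are placeholders standing in for real audio processing.]

```javascript
// Dispatch-table multiplexing: one onprocessmedia handler serves several
// processing routines, selected by a field on the event's params object.
const dispatchTable = {
  sinGenerator(event) {
    // Placeholder for generating a sine wave at event.params.freq Hz.
    return 'sin:' + event.params.freq;
  },
  foobarEffect(event) {
    // Placeholder for an effect over event's input samples.
    return 'foobar';
  }
};

function onprocessmedia(event) {
  // One property lookup plus an indirect call per processing callback.
  return dispatchTable[event.params.type](event);
}
```

[Usage: `onprocessmedia({ params: { type: 'sinGenerator', freq: 440 } })` routes to the sinGenerator entry. Whether the lookup-plus-indirect-call cost matters relative to per-sample array work is the point under debate in the thread.]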
>> It looks like a simple UA-independent guideline that can be given is
>> this:
>> - if your audio processing nodes are sequential in the graph, dispatch
>>   them onto the same worker;
>> - if they are parallel, dispatch them onto different workers.
>
> That's good, but it may not be optimal. In some situations you might get
> better throughput by placing sequential nodes on different cores.

I guess this will be very UA-dependent. I suspect the rough guidelines
above will be a great starting point.

>>>> Instead of making params a field of the event object, maybe it would
>>>> be cleaner to make params an extra argument to the callback, and also
>>>> to pass an initial value on node/stream construction?
>>>
>>> I don't understand the benefits of this. Having an event callback with
>>> a single event object parameter is a standard pattern for Web APIs and
>>> I don't see a need to deviate from it here.
>>
>> This just makes for nicer-looking and more understandable code, both on
>> node creation and in the processing routine.
>
> OK, it's not a big deal one way or another.
>
> My biggest problem with the params approach right now is this... It's
> natural to write:
>
>   stream.params = {x:1, y:2};
>   stream.params.x = 3;
>
> Unfortunately the second statement does not trigger any change in the
> worker, because only assignments to the params attribute itself trigger
> structured cloning and dispatch to the worker. I haven't thought of a
> reasonable fix for this yet. Any suggestions appreciated.
>
> Rob
> --
> “You have heard that it was said, ‘Love your neighbor and hate your
> enemy.’ But I tell you, love your enemies and pray for those who
> persecute you, that you may be children of your Father in heaven. ... If
> you love those who love you, what reward will you get? Are not even the
> tax collectors doing that? And if you greet only your own people, what
> are you doing more than others?” [Matthew 5:43-47]
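[The params pitfall Robert raises at the end - that only assignment to the attribute itself triggers a clone - follows from structured-clone snapshot semantics, which this sketch demonstrates. structuredClone here models what assigning stream.params would dispatch to the worker; the variable names are illustrative.]

```javascript
// Assigning params takes a structured-clone snapshot; later mutation of
// the page-side object is invisible to the cloned copy the worker holds.
const pageParams = { x: 1, y: 2 };
const workerCopy = structuredClone(pageParams); // what `stream.params = {...}` sends
pageParams.x = 3;                               // `stream.params.x = 3` mutates only the page side
// workerCopy.x is still 1: the worker never observes the update
```

[This is why a second `stream.params = pageParams;` (re-triggering the clone) would be needed to propagate the change under the proposal as described.]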
Received on Friday, 16 March 2012 13:17:04 UTC