
Re: Call for Consensus: retire current ScriptProcessorNode design & AudioWorker proposal

From: Srikumar K. S. <srikumarks@gmail.com>
Date: Wed, 13 Aug 2014 06:17:07 +0530
Cc: Joseph Berkovitz <joe@noteflight.com>, Raymond Toy <rtoy@google.com>, Robert O'Callahan <robert@ocallahan.org>, Olivier Thereaux <olivier.thereaux@bbc.co.uk>, Audio WG <public-audio@w3.org>
Message-Id: <C25A2AC3-CDB4-4D58-A777-3E9C01FA3842@gmail.com>
To: Chris Wilson <cwilso@google.com>
Parallelism when rendering offline audio contexts would be nice to have, but it is already
available today without resorting to worker threads. If I'm mixing down two tracks in an
offline context at the end, I can spin off one offline context per track and mix down the
results. This kind of “split the graph” parallelism is already adequate for many cases, so
I’d agree with Joe’s stance on this. It doesn’t seem a pressing issue to address (compared
to, say, timing and synchronization).
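To illustrate the “split the graph” idea, here is a minimal sketch. The OfflineAudioContext setup is only outlined in comments (the subgraph construction is omitted); the mixdown helper below is a plain sample-wise sum, and its name is mine, not from any spec.

```javascript
// Hypothetical setup: render each track in its own OfflineAudioContext,
// letting the browser schedule the two renders independently.
//
//   const ctxA = new OfflineAudioContext(2, length, 44100);
//   const ctxB = new OfflineAudioContext(2, length, 44100);
//   // ... build each track's subgraph, then:
//   const [bufA, bufB] = await Promise.all([ctxA.startRendering(),
//                                           ctxB.startRendering()]);

// Pure mixdown of two equal-length channels (Float32Array in, Float32Array out).
function mixDown(chanA, chanB) {
  const out = new Float32Array(chanA.length);
  for (let i = 0; i < chanA.length; i++) {
    out[i] = chanA[i] + chanB[i]; // simple sum; scale or limit as needed
  }
  return out;
}
```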

I have another question regarding the new worker node proposal. The script node’s buffer
lengths declared in the spec are 256, 512, 1024, 2048, 4096, 8192 and 16384, whereas the
context’s native block length (for k-rate calculations) is 128 frames. If script nodes are
being moved into the audio thread, is there any reason not to allow a buffer length of 128
as well? That would put script nodes completely on par with native nodes as far as I can
tell and would finally let us emulate native nodes using script nodes, which would be a
great way to provide a reference implementation based entirely on script nodes.
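As a sketch of what “on par with native nodes” could look like, here is a 128-frame process callback emulating a fixed-gain node entirely in script. The callback shape is loosely modeled on the proposed worker-node onaudioprocess and is an assumption, not the actual proposed signature.

```javascript
// The context's native block size, which the proposal could expose to script.
const QUANTUM = 128;

// Emulate a fixed-gain node: one input block in, one output block out,
// both QUANTUM frames long. The (input, output) shape is illustrative.
function makeGainProcessor(gain) {
  return function process(input, output) {
    for (let i = 0; i < QUANTUM; i++) {
      output[i] = input[i] * gain;
    }
  };
}

// Usage: run one block through the emulated node.
const proc = makeGainProcessor(0.5);
const input = new Float32Array(QUANTUM).fill(1.0);
const output = new Float32Array(QUANTUM);
proc(input, output);
```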

An argument raised earlier against a size of 128 for script nodes (IIRC) was that the
message rate would be very high, and that JS shouldn’t be handed such high message rates.
With offline processing, the API already lets JS code use 100% of all CPUs, and it could
do that by itself anyway with worker threads. So that argument is moot now, and nothing
stands in the way of 128-frame buffer lengths.
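For a sense of scale, the arithmetic behind the message-rate concern works out as follows (assuming a 44.1 kHz sample rate):

```javascript
// Callbacks per second for a given buffer length at a 44.1 kHz sample rate.
const sampleRate = 44100;
const callbacksPerSecond = (bufferLength) => sampleRate / bufferLength;

console.log(callbacksPerSecond(128)); // ~344.5 callbacks/s at the native block size
console.log(callbacksPerSecond(256)); // ~172.3 at the smallest spec'd script size
```

So 128-frame blocks roughly double the callback rate relative to the smallest currently spec'd size, which matters far less once the callback no longer crosses a thread boundary.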

To me, it would even be acceptable to support *only* 128, since I’m happy to build any
other buffering I need on top of it. The buffer length argument of the main-thread script
node was, I believe, introduced to give some control over stuttering caused by UI/layout
work, by letting developers trade latency for safety. That would no longer be necessary
with worker nodes, since they’ll be running in the audio thread itself rather than the
main thread.
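Building other buffer lengths on top of a fixed 128-frame callback is straightforward. Here is one way it could look: accumulate native-sized blocks and fire a callback once a larger buffer is full. All names here are illustrative, not from the proposal.

```javascript
const QUANTUM = 128; // the context's native block size

// Accumulate QUANTUM-frame blocks into a larger buffer; invoke onFull with
// a copy each time targetLength frames have been collected.
function makeAccumulator(targetLength, onFull) {
  const buffer = new Float32Array(targetLength);
  let filled = 0;
  return function pushBlock(block /* Float32Array of QUANTUM frames */) {
    buffer.set(block, filled);
    filled += block.length;
    if (filled === targetLength) {
      onFull(buffer.slice()); // hand off a copy of the full buffer
      filled = 0;
    }
  };
}

// Usage: four 128-frame blocks produce one 512-frame buffer.
let delivered = null;
const push = makeAccumulator(512, (buf) => { delivered = buf; });
for (let n = 0; n < 4; n++) push(new Float32Array(QUANTUM).fill(n));
```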

-Kumar


> On 13 Aug 2014, at 2:47 am, Chris Wilson <cwilso@google.com> wrote:
> 
> No, that's what I mean - you CAN exploit multiple cores to handle the work of onaudioprocess callbacks in realtime AudioContexts; it's just that the developer would be responsible for making a latency-versus-glitch-resistance tradeoff in their implementation. You'd insert some latency to make up for the async lag while you postMessage the request for processing to your other Worker, and hopefully it would get back to you before you needed the data, currentTime+latency later. (Sorry, this is much easier to diagram than write.)
> 
> However, this doesn't work at all in offline, because the audio thread will basically run at 100% CPU until it's done; you'd likely get very unpredictable jittering in the async responses.  The only way to do this across cores in offline is have some way to tell the audio system (that's pulling audio data as fast as it can) "I need you to wait for a bit."
> 
> 
> On Tue, Aug 12, 2014 at 1:00 PM, Joseph Berkovitz <joe@noteflight.com> wrote:
> I understand; let me qualify my statement more carefully. I just meant that exploiting multiple cores to handle the work of onaudioprocess() callbacks would not be possible in real time, since we’ve stated that these callbacks always occur directly and synchronously in the audio thread, of which there is only one per context.
> 
> I think that what people are getting at is some interest in exploiting parallelism by analyzing the audio graph and determining/declaring parallelizable subgraphs of it. That is the kind of thing I think we should table for now.
> 
> …Joe
> 
> 
> On Aug 12, 2014, at 2:52 PM, Chris Wilson <cwilso@google.com> wrote:
> 
>> On Tue, Aug 12, 2014 at 11:34 AM, Joseph Berkovitz <joe@noteflight.com> wrote:
>> In the meantime I think it would be fine to table the idea of multicore usage by offline audio context until further study can take place. It’s not going to be possible in a real-time audio context either, so this is not outright disadvantaging offline usage. 
>> 
>> Actually, it *is* possible in a real-time context - you would just be responsible for forking a Worker thread and passing the data back and forth (dealing with asynchronicity by buffering latency yourself). 
> 
> 




Received on Wednesday, 13 August 2014 00:47:43 UTC
