Re: Call for Consensus: retire current ScriptProcessorNode design & AudioWorker proposal

Another clarification, since I seem not to be getting my point across, which I’m sure is my fault:

Any developer can *always* exploit multiple cores in the realtime case to parallelize audio processing by using postMessage() with Workers. I understand that. And of course you’d have to play some latency tricks, as you suggest.
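
For concreteness, here is a rough sketch of that technique. The worker script name ("dsp-worker.js") and its message format are invented for illustration; the real division of labor is up to the developer:

const LATENCY_BLOCKS = 4;   // latency we accept, in blocks, to absorb async jitter
const BLOCK_SIZE = 1024;

const ctx = new AudioContext();
const node = ctx.createScriptProcessor(BLOCK_SIZE, 1, 1);
const worker = new Worker('dsp-worker.js');  // does the DSP on another core

const processed = [];       // blocks returned by the worker, in arrival order
worker.onmessage = (e) => processed.push(new Float32Array(e.data));

let blocksSent = 0;
node.onaudioprocess = (e) => {
  // Hand a copy of the input block to the worker (transferring the
  // buffer avoids a second copy).
  const copy = e.inputBuffer.getChannelData(0).slice();
  worker.postMessage(copy.buffer, [copy.buffer]);
  blocksSent++;

  // Output silence for the first LATENCY_BLOCKS blocks; after that,
  // drain the queue. If the worker hasn't kept up, we glitch (stay silent).
  const out = e.outputBuffer.getChannelData(0);
  if (blocksSent > LATENCY_BLOCKS && processed.length > 0) {
    out.set(processed.shift());
  } else {
    out.fill(0);
  }
};

// (Connect some source into `node` to feed it input.)
node.connect(ctx.destination);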

All that I am saying is that Web Audio should not try to parallelize processing all by itself, by using multiple audio threads. I believe that some people are implicitly asking for that. This is the idea that I think should be set aside.

The offline issue is just as you describe it. Some way is required to pause or throttle the audio engine.
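
Something along these lines, say. The suspend()/resume() pair and the workerResultsUpTo() helper below are invented for illustration, not an existing API:

const offline = new OfflineAudioContext(1, 44100 * 10, 44100);

// ... build the graph here, including a node fed by a Worker ...

// Stand-in for whatever mechanism actually waits on the Worker's output.
function workerResultsUpTo(seconds) {
  return Promise.resolve();
}

// Ask the engine to pause at each second of audio time, wait until the
// Worker's results for the next second have arrived, then let it continue.
for (let t = 1; t < 10; t++) {
  offline.suspend(t).then(async () => {
    await workerResultsUpTo(t + 1);
    offline.resume();
  });
}

offline.startRendering().then((rendered) => {
  // `rendered` now holds the complete, fully processed AudioBuffer.
});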

…Joe


On Aug 12, 2014, at 5:17 PM, Chris Wilson <cwilso@google.com> wrote:

> No, that's what I mean - you CAN exploit multiple cores to handle the work of onaudioprocess callbacks in realtime AudioContexts; it's just that the developer would be responsible for making the tradeoff between latency and predictable, glitch-free output in their implementation. You'd just insert some latency to make up for the async lag while you postMessage() the processing request to your other Worker. Hopefully it would get back to you before you needed the data, currentTime+latency later. (Sorry, this is much easier to diagram than write.)
> 
> However, this doesn't work at all in offline, because the audio thread will basically run at 100% CPU until it's done; you'd likely get very unpredictable jitter in the async responses. The only way to do this across cores in offline is to have some way to tell the audio system (which is pulling audio data as fast as it can) "I need you to wait for a bit."
> 
> 
> On Tue, Aug 12, 2014 at 1:00 PM, Joseph Berkovitz <joe@noteflight.com> wrote:
> I understand — let me qualify my statement more carefully. I just meant that exploiting multiple cores to handle the work of onaudioprocess() callbacks would not be possible in real time, as we’ve stated that these callbacks always occur directly and synchronously in the audio thread, of which there is only one per context.
> 
> I think that what people are getting at is some interest in exploiting parallelism by analyzing the audio graph and determining/declaring parallelizable subgraphs of it. That is the kind of thing I think we should table for now.
> 
> …Joe
> 
> 
> On Aug 12, 2014, at 2:52 PM, Chris Wilson <cwilso@google.com> wrote:
> 
>> On Tue, Aug 12, 2014 at 11:34 AM, Joseph Berkovitz <joe@noteflight.com> wrote:
>> In the meantime I think it would be fine to table the idea of multicore usage by offline audio context until further study can take place. It’s not going to be possible in a real-time audio context either, so this is not outright disadvantaging offline usage. 
>> 
>> Actually, it *is* possible in a real-time context - you would just be responsible for forking a Worker thread and passing the data back and forth (dealing with asynchronicity by buffering latency yourself). 
> 
> 

…Joe

Joe Berkovitz
President

Noteflight LLC
Boston, Mass.
phone: +1 978 314 6271
www.noteflight.com
"Your music, everywhere"

Received on Wednesday, 13 August 2014 13:34:24 UTC