Re: Call for Consensus: retire current ScriptProcessorNode design & AudioWorker proposal

No, that's what I mean - you CAN exploit multiple cores to handle the work
of onaudioprocess callbacks in realtime AudioContexts; it's just that the
developer would be responsible for making the tradeoff between latency and
predictable, glitch-free output in their implementation.  You'd insert some
latency to cover the async lag while you postMessage the request for
processing to your other Worker, and hopefully it gets back to you before
you need the data, currentTime+latency later. (Sorry, this is much easier
to diagram than to write.)
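To make the diagram-in-words a bit more concrete, here is a minimal sketch
of that approach, assuming a hypothetical "processor-worker.js" script that
posts back one processed Float32Array per block it receives; the block size
and the size of the latency cushion are purely illustrative:

    // Sketch: using a second core from a real-time ScriptProcessorNode by
    // shipping each input block to a Worker and only emitting blocks that
    // have already come back. "processor-worker.js" is hypothetical.
    const audioCtx = new AudioContext();
    const BLOCK_SIZE = 1024;      // onaudioprocess block size (illustrative)
    const LATENCY_BLOCKS = 4;     // extra blocks of latency to absorb async lag

    const worker = new Worker('processor-worker.js');  // hypothetical worker
    const processedQueue = [];                          // blocks returned so far

    worker.onmessage = (e) => {
      // Assumption about the worker's protocol: it posts back a processed
      // ArrayBuffer for each input block.
      processedQueue.push(new Float32Array(e.data));
    };

    const node = audioCtx.createScriptProcessor(BLOCK_SIZE, 1, 1);
    node.onaudioprocess = (e) => {
      const input = e.inputBuffer.getChannelData(0);
      const output = e.outputBuffer.getChannelData(0);

      // Hand the current block to the other core; copy first, since the
      // underlying buffer is transferred rather than cloned.
      const copy = new Float32Array(input);
      worker.postMessage(copy.buffer, [copy.buffer]);

      // Only start emitting once enough processed blocks have accumulated;
      // until then, output silence.  This is where the latency gets inserted.
      if (processedQueue.length >= LATENCY_BLOCKS) {
        output.set(processedQueue.shift());
      } else {
        output.fill(0);   // silence while the latency cushion builds up
      }
    };

    node.connect(audioCtx.destination);

If the Worker falls behind, the queue drains and you get silence rather than
a hard glitch; a bigger LATENCY_BLOCKS buys more predictability at the cost
of more delay, which is exactly the tradeoff described above.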

However, this doesn't work at all in offline, because the audio thread will
basically run at 100% CPU until it's done; you'd likely get very
unpredictable jitter in the async responses.  The only way to do this
across cores in offline is to have some way to tell the audio system (which
is pulling audio data as fast as it can) "I need you to wait for a bit."


On Tue, Aug 12, 2014 at 1:00 PM, Joseph Berkovitz <joe@noteflight.com>
wrote:

> I understand — let me qualify my statement more carefully. I just meant
> that exploiting multiple cores to handle the work of onaudioprocess()
> callbacks would not be possible in real time, as we’ve stated that these
> callbacks always occur directly and synchronously in the audio thread, of
> which there is only one per context.
>
> I think that what people are getting at is some interest in exploiting
> parallelism by analyzing the audio graph and determining/declaring
> parallelizable subgraphs of it. That is the kind of thing I think we should
> table for now.
>
> …Joe
>
>
> On Aug 12, 2014, at 2:52 PM, Chris Wilson <cwilso@google.com> wrote:
>
> On Tue, Aug 12, 2014 at 11:34 AM, Joseph Berkovitz <joe@noteflight.com>
> wrote:
>
>> In the meantime I think it would be fine to table the idea of multicore
>> usage by offline audio context until further study can take place. It’s not
>> going to be possible in a real-time audio context either, so this is not
>> outright disadvantaging offline usage.
>>
>
> Actually, it *is* possible in a real-time context - you would just be
> responsible for forking a Worker thread and passing the data back and forth
> (dealing with asynchronicity by buffering latency yourself).
>
>
>

Received on Tuesday, 12 August 2014 21:18:21 UTC