Re: [web-audio-api] Worker-based ScriptProcessorNode (#113)

Hi Norbert,

I believe that the async use case (2) is indirectly supported by Chris’s proposal, because AudioWorker scripts are able to pass arbitrary data asynchronously to and from the main thread using postMessage(). The main thread can then delegate work described by those messages to other Web Workers that are carrying out longer-term audio processing, and pass their results back to the original AudioWorker via further postMessage() invocations.
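
For concreteness, here is a rough sketch of that relay. The createAudioWorker() factory, the node's postMessage()/onmessage pair, and the worker-global onaudioprocess handler are my reading of Chris's proposal, so treat the exact names and signatures as assumptions rather than settled API:

    // Main thread (sketch): relay between the AudioWorker and an ordinary
    // Web Worker that carries out the long-running analysis.
    // context: an existing AudioContext.
    var analysisWorker = new Worker("analysis-worker.js");
    var audioNode = context.createAudioWorker("audio-worker.js", 1, 1);

    audioNode.onmessage = function (e) {
      // Forward requests coming out of the audio thread...
      analysisWorker.postMessage(e.data);
    };
    analysisWorker.onmessage = function (e) {
      // ...and relay the (possibly much later) results back in.
      audioNode.postMessage(e.data);
    };
    audioNode.connect(context.destination);

    // audio-worker.js (sketch): never block the audio thread; post a request
    // and use whatever result has already arrived when the next block runs.
    var latestResult = null;
    onmessage = function (e) { latestResult = e.data; };
    onaudioprocess = function (e) {
      // ...fill e.outputs, consulting latestResult if one has arrived...
      postMessage({ request: "analysis", time: e.playbackTime });
    };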

In HTML5 environments supporting SharedWorkers and/or MessageChannels, it should furthermore be possible to have direct communication between AudioWorkers and other workers, but the above, as a minimum, ought to suffice to achieve the goal of "not interrupting anything else".
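
For example (speculative, since it assumes the proposed AudioWorkerNode.postMessage() accepts a transfer list the way Worker.postMessage() does):

    // Main thread (sketch): hand each side one end of a MessageChannel so
    // they can talk to each other without touching the main thread at all.
    var channel = new MessageChannel();
    audioNode.postMessage({ port: channel.port1 }, [channel.port1]);
    analysisWorker.postMessage({ port: channel.port2 }, [channel.port2]);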


On Aug 11, 2014, at 12:00 PM, Norbert Schnell <notifications@github.com> wrote:
> However, concerning the synchronous requirement (1.), I cannot stop dreaming of a "shader-like" solution that would allow everything to be heavily JIT-compiled. Cycling '74's "gen~" in Max/MSP (http://cycling74.com/products/gen/) and GRAME's Faust (http://faust.grame.fr/) are two examples of possible formalisms. Apart from the possibility of radical optimization, the big advantage of such an approach is that the syntax itself can be well adapted to the task(s) of audio processing. Faust, especially, is a very good example of how a community of DSP specialists can adopt a formalism to contribute their knowledge to an open community of programmers, many of whom just practice copy-and-pasting of free code snippets. I would like to have these specialists around the Web Audio API.

Just my opinion about priorities: I personally feel that shader-like solutions may be valuable, but that they should be implemented as another flavor of audio node, distinct from the one proposed for script processing. We need to deliver a pure JS-based processing paradigm first, to pin down the API, before researching and settling on other approaches. Blending the shader-language debate into this discussion would only hold up a solution further.

> On a more concrete note, I have a little idea for the "AudioWorker" interface: what about renaming "playbackTime" (a weird name anyway) to "inputTime" and adding another value named "outputTime" for the (envisaged :-) time when the produced frames would meet the rest of the Web Audio graph again (probably playbackTime + latency)? Wouldn't that make things even clearer?
I think this question may stem from a confusion about the nature of AudioWorker. Its produced frames "meet the rest of the audio graph" immediately, just like a native node's output, because its processing runs directly on the audio thread.
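
To make that concrete, here is a minimal pass-through handler, again assuming the proposal's event shape (e.inputs/e.outputs as arrays of per-channel Float32Arrays, plus e.playbackTime):

    // audio-worker.js (sketch): this handler runs on the audio thread, so
    // whatever it writes into e.outputs simply IS the node's output for this
    // block; there is no extra hop back into the graph.
    onaudioprocess = function (e) {
      var input  = e.inputs[0][0];   // first input, first channel
      var output = e.outputs[0][0];  // first output, first channel
      for (var i = 0; i < output.length; i++) {
        output[i] = input[i];
      }
      // e.playbackTime: the context time at which this block will be heard.
    };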

.            .       .    .  . ...Joe

Joe Berkovitz
President

Noteflight LLC
Boston, Mass.
phone: +1 978 314 6271
www.noteflight.com
"Your music, everywhere"

Received on Monday, 11 August 2014 16:43:53 UTC