Re: [Minutes] TPAC 6-7 November 2017 SF

Hi Audio Working Group,

Interesting to see the work on AudioWorkletProcessor with WASM code!

I've just finished a test page for Faust + WASM today: http://faust.grame.fr/dynamic/faustlive-wasm-worklet.html. It can be tested with the Faust DSP code located at http://faust.grame.fr/modules/: drag the « FAUST » button (which contains the DSP URL) onto the drop zone of the faustlive-wasm-worklet.html page.

The method used here is to dynamically create the AudioWorkletProcessor class as a string, with the WASM code dynamically produced by Faust ‘injected’ into the class as a Uint8Array of bytes. This string is then converted into a URL with window.URL.createObjectURL(new Blob([xxxx])) and finally passed to the addModule(url) call.
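
For the curious, here is a minimal sketch of the pattern (the names myWasmBytes, MyProcessor, 'my-processor' and audioContext are placeholders; the real code is in the wrapper linked below):

    // myWasmBytes is assumed to be a Uint8Array containing the WASM binary
    // produced by the Faust compiler. The processor source is built as a
    // string with those bytes embedded in it.
    const processorSource = `
      const wasmBytes = new Uint8Array([${myWasmBytes.join(',')}]);
      class MyProcessor extends AudioWorkletProcessor {
        constructor() {
          super();
          // Compile and instantiate synchronously inside the global scope
          // (assuming the module needs no imports).
          const module = new WebAssembly.Module(wasmBytes);
          this.instance = new WebAssembly.Instance(module);
        }
        process(inputs, outputs, parameters) {
          // ... call this.instance.exports to fill the outputs ...
          return true;
        }
      }
      registerProcessor('my-processor', MyProcessor);
    `;

    // The string becomes a URL, which is then loaded into the worklet.
    const url = window.URL.createObjectURL(new Blob([processorSource], { type: 'text/javascript' }));
    audioContext.audioWorklet.addModule(url).then(() => {
      const node = new AudioWorkletNode(audioContext, 'my-processor');
      node.connect(audioContext.destination);
    });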

See http://faust.grame.fr/dynamic/webaudio-wasm-worklet-wrapper.js, in particular the mydspProcessorString.

(well this is quite ‘laborious’ and probably not very efficient, but it works…)

Stéphane 

> Le 8 nov. 2017 à 17:00, Joe Berkovitz <joe@noteflight.com> a écrit :
> 
> Hi all,
> 
> It's been a great meeting this year at TPAC. Here is a rough summary of the ground we covered (at least, the part for which I was able to be present).
> 
> - Our CR transition request is in progress and a final review has been requested from the TAG.
> 
> - Further work on populating the test suite was planned.
> 
> - All v.next milestone issues have been reviewed. A rough cut at a set of high priority issues was identified by the group. Also, some issues were clarified and/or closed during this review.
> 
> - We have three remaining issues with the spec language for AudioWorklet and its friends which we expect to close within the week:
> 
>    -- #1435 is the most substantive of the three issues. It relates to the initialization sequence for AudioWorkletGlobalScopes and the question of the relationship of such scopes to their owning AudioContexts. We conferred with Ian Kilpatrick and after much discussion arrived at the conclusion that global scopes must be created in a way that unambiguously identifies the owning context. In this way, each addModule() call is specific to a context, and results in a promise that resolves only after all processor classes have been registered for that context. This requires that we move the audioWorklet attribute out of Window and into BaseAudioContext.
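> 
>    (A minimal sketch of the resulting shape, with a hypothetical module my-processor.js and processor name 'my-processor':)
> 
>        const ctx = new AudioContext();
>        // addModule() is reached through the owning context, and the returned
>        // promise resolves only once the processors in my-processor.js have
>        // been registered for this context's global scope.
>        ctx.audioWorklet.addModule('my-processor.js').then(() => {
>          const node = new AudioWorkletNode(ctx, 'my-processor');
>          node.connect(ctx.destination);
>        });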
> 
>   -- #1441 is a minor typo.
> 
>   -- #1442 is an omission of the serialization/passthrough of a new AudioWorkletNode's options dictionary, which was supposed to be communicated to the associated AudioWorkletProcessor. It is easy to add the missing language.
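> 
>   (Roughly, the intent is that the options dictionary given on the main thread reappears in the processor constructor; a minimal sketch with hypothetical names:)
> 
>       // Main thread: the options dictionary given here...
>       const node = new AudioWorkletNode(ctx, 'my-processor', { numberOfInputs: 1, outputChannelCount: [2] });
> 
>       // ...is serialized and handed to the corresponding processor's
>       // constructor in the AudioWorkletGlobalScope.
>       class MyProcessor extends AudioWorkletProcessor {
>         constructor(options) {
>           super(options);
>           // options.outputChannelCount[0] === 2
>         }
>         process(inputs, outputs) { return true; }
>       }
>       registerProcessor('my-processor', MyProcessor);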
> 
> - We met with the WebAssembly CG to discuss how to best support AudioWorkletProcessor with WASM code. The discussion had several facets:
> 
>    -- Both groups affirmed that WebAssembly and AudioWorklet together have a very strong set of combined use cases.
> 
>    -- A demo of FM synthesis using WASM and the current Chrome Canary AudioWorklet implementation was shown. The integration wasn't an example of best practices but it showed feasibility.
> 
>    -- We discussed the need for a clean way to load WASM modules directly into AudioWorklets via addModule(). Perhaps such modules could register their own processors through Web Audio APIs exported for visibility within WASM. However, the JS-class-based approach in the current AudioWorkletProcessor may be a poor fit.
> 
>    -- A shorter-term approach to surfacing WASM in AudioWorklets is to construct a Module on the main thread and serialize it through the AudioWorkletNode at construction time, instantiating it on the audio thread side if it doesn't already exist there. The serialization and instantiation of modules are expected to be lightweight operations.
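> 
>    (A rough sketch of this shorter-term path, assuming the compiled WebAssembly.Module can be cloned through the node's options, here via a hypothetical processorOptions field:)
> 
>        // Main thread: compile once, then hand the Module to the node.
>        WebAssembly.compile(wasmBytes).then((module) => {
>          const node = new AudioWorkletNode(ctx, 'wasm-processor', {
>            processorOptions: { wasmModule: module }
>          });
>          node.connect(ctx.destination);
>        });
> 
>        // AudioWorkletGlobalScope: instantiate the received Module
>        // synchronously in the processor constructor (assuming it needs
>        // no imports).
>        class WasmProcessor extends AudioWorkletProcessor {
>          constructor(options) {
>            super(options);
>            this.instance = new WebAssembly.Instance(options.processorOptions.wasmModule);
>          }
>          process(inputs, outputs) {
>            // ... call this.instance.exports to fill the output buffers ...
>            return true;
>          }
>        }
>        registerProcessor('wasm-processor', WasmProcessor);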
> 
>    -- The current approach, in which the audio thread owns the array buffers used to communicate with an AudioWorkletProcessor, forces unnecessary copying of data, since those buffers can't be seen directly by WASM. We talked about an alternative approach in which the WASM (or perhaps even JS) processor provisions the buffers for the engine itself. This requires an alternative protocol in which the engine states its needs and the processor fulfills them, so it will require a fair bit of thinking together.
> 
>    -- We would like to follow up with further joint Web Audio / WebAssembly meetings preceded by a call for concrete proposals on the above points.
> 
> Let me know if I omitted anything. And thanks again everyone for a great TPAC. 
> 
> Best,
> 
> .            .       .    .  . ...Joe
> 
> Joe Berkovitz
> Founder
> Noteflight LLC
> 
> 49R Day Street
> Somerville MA 02144
> USA
> 
> "Bring music to life"
> www.noteflight.com

Received on Wednesday, 8 November 2017 16:51:17 UTC