Re: Make use of multi-threading in an AudioWorklet (WASM)

Kai,

Multi-threaded real-time low-latency audio is not a particularly simple
problem, especially in the context of graph processing like in the Web
Audio API. Few general-purpose audio processing systems running on
"normal" OSes have solved it (Supernova, jack2, maybe others).

You pinpoint exactly the inherent difficulty in this setup: multi-threading
implies non-determinism; if the threads have different scheduling classes
you get priority inversions; and in any case you need to buffer between
threads, etc.

I hope we can find a solution for this at some point. Here are the parts we
need:


   - Re-enable SharedArrayBuffer in browser implementations (this is coming
   soon; the Web Audio API specification already accounts for it)
   - Have a way to tell an AudioContext that it should try to run on a
   different thread: for now, this is implicit, and implementations do
   different things, because it's a tradeoff [0]. I'm thinking maybe a flag
   in the AudioContextOptions
   <https://webaudio.github.io/web-audio-api/#dictdef-audiocontextoptions>;
   a hypothetical sketch follows this list
   - Write some code to communicate between the threads using the usual
   wait-free ring buffers, etc. (not a spec matter; I think everything we
   need is available; a sketch of the consumer side follows this list)

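For the second item, here is a purely hypothetical sketch of what such a
flag could look like. To be clear: "renderThreadHint" and its values are
invented here for illustration and exist nowhere in the specification;
only latencyHint is part of AudioContextOptions today.

    // Hypothetical: a thread-placement hint in AudioContextOptions.
    // "renderThreadHint" is NOT part of the Web Audio API; it only
    // illustrates the shape of such a flag.
    interface ThreadedAudioContextOptions extends AudioContextOptions {
      renderThreadHint?: "shared" | "dedicated";
    }

    const opts: ThreadedAudioContextOptions = {
      latencyHint: "interactive",    // exists in AudioContextOptions today
      renderThreadHint: "dedicated", // hypothetical: ask for its own thread
    };
    const ctx = new AudioContext(opts);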

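For the third item, here is a minimal sketch (in TypeScript) of the
consumer half of such a wait-free ring buffer, popping samples inside an
AudioWorkletProcessor. The buffer layout, the capacity, and all the names
here are assumptions for illustration, not Web Audio API features:

    // sab-consumer.ts: runs in the AudioWorkletGlobalScope (the real-time
    // audio thread). Consumer half of a single-producer/single-consumer
    // wait-free ring buffer over a SharedArrayBuffer.

    // These globals exist in the worklet scope but not in TypeScript's
    // DOM lib, so declare them here:
    declare class AudioWorkletProcessor {
      readonly port: MessagePort;
    }
    declare function registerProcessor(
      name: string,
      ctor: new (options?: any) => AudioWorkletProcessor
    ): void;

    const CAPACITY = 4096; // ring size in mono samples (an assumption)

    class RingReader {
      private header: Int32Array;    // [0] = read index, [1] = write index
      private samples: Float32Array; // audio payload
      constructor(sab: SharedArrayBuffer) {
        this.header = new Int32Array(sab, 0, 2);
        this.samples = new Float32Array(sab, 8, CAPACITY);
      }
      // Wait-free pop: never blocks, copies what is available, returns the
      // number of samples actually read.
      pop(out: Float32Array): number {
        const r = Atomics.load(this.header, 0);
        const w = Atomics.load(this.header, 1);
        const available = (w - r + CAPACITY) % CAPACITY;
        const n = Math.min(available, out.length);
        for (let i = 0; i < n; i++) {
          out[i] = this.samples[(r + i) % CAPACITY];
        }
        Atomics.store(this.header, 0, (r + n) % CAPACITY); // publish
        return n;
      }
    }

    class SabConsumerProcessor extends AudioWorkletProcessor {
      private reader: RingReader;
      constructor(options: any) {
        super();
        // The SharedArrayBuffer is handed over via processorOptions.
        this.reader = new RingReader(options.processorOptions.sab);
      }
      process(_inputs: Float32Array[][],
              outputs: Float32Array[][]): boolean {
        const out = outputs[0][0]; // one render quantum: 128 frames
        const got = this.reader.pop(out);
        for (let i = got; i < out.length; i++) out[i] = 0; // underrun
        return true; // keep the processor alive
      }
    }
    registerProcessor("sab-consumer", SabConsumerProcessor);

The producer side, running in a Worker next to the WASM DSP code (or in
another graph's worklet), mirrors this: it reads both indices, writes
samples into the free region, then publishes the new write index with
Atomics.store.
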
This would not be a fork-and-join approach, and it is not parallel
processing of a single graph, but it should enable multi-threaded audio
processing using multiple communicating graphs, so I don't think it's
exactly what you're asking for, but it's close. This has latency
implications, but Firefox on OSX already uses a 128-frame I/O vector size
by default, and 128 frames at 44.1 kHz is about 2.9 ms, so even doubling
or tripling the buffering to get a resilient system stays well under 10 ms
of latency, which shows this is possible, as sketched below.
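
To make this concrete, here is a sketch of the main-thread wiring for two
communicating graphs. The "sab-producer" module is hypothetical and would
mirror the consumer sketched above, and whether the two contexts actually
land on different threads is implementation-defined today (see [0]):

    // Main-thread wiring (sketch): two graphs, one SharedArrayBuffer
    // between them. "sab-producer.js" is hypothetical; "sab-consumer.js"
    // is the module sketched above.
    async function setup(): Promise<void> {
      // 8 header bytes (read/write indices) + 4096 float32 samples.
      const sab = new SharedArrayBuffer(8 + 4096 * 4);

      const ctxA = new AudioContext(); // producer graph
      const ctxB = new AudioContext(); // consumer graph; whether it runs
                                       // on another thread is up to the
                                       // browser (see [0])
      await ctxA.audioWorklet.addModule("sab-producer.js");
      await ctxB.audioWorklet.addModule("sab-consumer.js");

      const producer = new AudioWorkletNode(ctxA, "sab-producer", {
        processorOptions: { sab },
      });
      const consumer = new AudioWorkletNode(ctxB, "sab-consumer", {
        processorOptions: { sab },
      });

      const osc = new OscillatorNode(ctxA);
      osc.connect(producer);
      producer.connect(ctxA.destination); // keeps it pulled; it can output
                                          // silence while feeding the ring
      consumer.connect(ctxB.destination);
      osc.start();
    }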

Hope this helps. I encourage you to file an issue about this in our V2
issue tracker <https://github.com/WebAudio/web-audio-api-v2/issues>, so
that we can continue discussing, with a wider group, the specification
gaps we would need to fill to support this use-case.

Thanks,
Paul.

[0] To take two examples deployed today: Chrome has a thread per
AudioContext (I think), which is better for parallelism but implies a
latency increase at graph boundaries. Firefox generally (but not always)
multiplexes graphs onto a single real-time thread, which is worse for
parallelism but means there is zero latency between graphs. Of course,
one would have to check that all of this is not linearized onto a single
thread somewhere down the line, for example in the OS mixer, in the
browser's audio remoting implementation, or somewhere else.

Received on Thursday, 23 April 2020 08:12:05 UTC