Better multi-CPU utilization

Hello,

I'm developing a DAW[1] and would like to see better multi-processor
utilization in the Web Audio API. This would allow playback of more tracks
simultaneously (a track being one or more source nodes routed through
effect nodes). Although the convolver tail computation is parallel, most
processing is single-threaded and synchronous, which leaves the other
processors underutilized and makes playback more sensitive to glitches.

One simple way to solve this with the current API would be to pre-render
each track in an OfflineAudioContext ("freeze"). However, that is an
all-or-nothing approach with a long delay and large memory requirements.
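For reference, a minimal sketch of that freeze approach with today's API
(buildTrackGraph() is a hypothetical helper that routes one track's source
and effect nodes to the given context's destination, and trackLengthFrames
is assumed to be known up front):

    var ctx = new AudioContext();

    // Render the whole track ahead of time (the "freeze").
    var offline = new OfflineAudioContext(2, trackLengthFrames, ctx.sampleRate);
    buildTrackGraph(offline);  // hypothetical: sources + effects -> offline.destination

    offline.oncomplete = function (e) {
      // Play the frozen track back through the realtime context.
      var source = ctx.createBufferSource();
      source.buffer = e.renderedBuffer;
      source.connect(ctx.destination);
      source.start(0);
    };
    offline.startRendering();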

Some better suggestions I can think of would be:

1. Being able to render an OfflineAudioContext chunk-wise, e.g.

    Promise offlineAudioContext.renderPartial(AudioBuffer buffer)

A lookahead scheduling scheme could be used to regularly (e.g. every
second) pre-render one chunk per track and enqueue the chunks into the
main AudioContext as AudioBufferSourceNodes, provided the playback could
be sample-accurate.

Such a flexible method would also help save memory during offline
rendering for other purposes, for example when sending the rendered audio
chunk by chunk over the wire or writing it to disk - without the current
limitation of pre-allocating all the memory and having to specify the
final duration up front.
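To make the scheduling idea concrete, here is a rough sketch;
renderPartial() is of course the hypothetical method from above, ctx is
the realtime AudioContext, and each entry in tracks is assumed to hold its
own offline context plus a nextStartTime:

    var CHUNK_SECONDS = 1;

    function scheduleNextChunk(track) {
      var chunk = ctx.createBuffer(2, CHUNK_SECONDS * ctx.sampleRate, ctx.sampleRate);
      // Hypothetical: render the next second of the track into the buffer.
      return track.offline.renderPartial(chunk).then(function () {
        var source = ctx.createBufferSource();
        source.buffer = chunk;
        source.connect(ctx.destination);
        source.start(track.nextStartTime);      // sample-accurate enqueue
        track.nextStartTime += CHUNK_SECONDS;
      });
    }

    // Lookahead loop: once per second, pre-render one more chunk per track.
    setInterval(function () {
      tracks.forEach(scheduleNextChunk);
    }, CHUNK_SECONDS * 1000);

The same loop could just as well push each rendered chunk to a file writer
or over a WebSocket instead of into AudioBufferSourceNodes.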

2. Leave it up to the Web Audio implementation to distribute rendering
across CPUs.
The audio graph would consist of "subgraphs" in addition to the "main
graph", connected by asynchronous handoffs. Since a handoff crosses
threads, which can take some amount of time, the subgraphs would require a
higher pre-scheduling latency compared to the main graph.

I believe some hinting from the user would help make the distribution
efficient while preserving low latency for the critical path (e.g. live
audio from getUserMedia or MIDI input). One way to do this would be a new
node type connecting the subgraphs to the main graph, e.g. SubgraphNode.
This node would imply a higher, perhaps user-specified, scheduling latency
for anything upstream, but would otherwise leave the audio unaffected (not
even delayed).
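As a sketch of what that hinting could look like from the application side
(SubgraphNode and its latency argument are hypothetical, as are the track
and effect nodes):

    // Heavy per-track processing goes behind a subgraph boundary; the
    // implementation is then free to render it on another thread, at the
    // cost of the specified extra scheduling latency.
    var subgraph = new SubgraphNode(ctx, { latency: 0.1 });  // hypothetical
    trackSource.connect(trackEffects);
    trackEffects.connect(subgraph);
    subgraph.connect(masterBus);

    // The latency-critical path stays on the main graph.
    liveInput.connect(monitorEffects);
    monitorEffects.connect(masterBus);
    masterBus.connect(ctx.destination);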

3. A combination of the above - introduce a new source node for the main
AudioContext that acts as the destination of an OfflineAudioContext, e.g.
AsyncContextSourceNode.
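Roughly (AsyncContextSourceNode being the hypothetical node, with
buildTrackGraph() and trackLengthFrames as in the freeze example, and
prebufferSeconds some small lookahead):

    // The track renders in its own offline context...
    var trackContext = new OfflineAudioContext(2, trackLengthFrames, ctx.sampleRate);
    buildTrackGraph(trackContext);

    // ...and the main context pulls from its destination asynchronously,
    // without waiting for the whole render to finish.
    var asyncSource = new AsyncContextSourceNode(ctx, trackContext);  // hypothetical
    asyncSource.connect(ctx.destination);
    asyncSource.start(ctx.currentTime + prebufferSeconds);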

These are the solutions I could think of - but maybe you have already done
work in this area?

Cheers,
Bjorn

[1] https://www.soundtrap.com

Received on Saturday, 19 April 2014 19:01:16 UTC