
Web Audio API sequencer capabilities

From: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
Date: Tue, 21 Aug 2012 20:58:02 +0300
Message-ID: <CAJhzemWNwVRbBt8hKeZUY2Q_OrzPeWS6T3hzM--qQs6RxRoePg@mail.gmail.com>
To: public-audio@w3.org

Hello group,

I've been thinking about how to use the Web Audio API to write a
full-fledged DAW with sequencing capabilities (e.g. MIDI), and I thought
I'd share some thoughts and questions with you.

Currently, it's pretty straightforward to use the Web Audio API to
schedule events in real time, which means it plays quite well together
with other real-time APIs, such as the Web MIDI API. For example, you can
simply schedule an AudioBuffer to play whenever a note-on event is received
from a MIDI source.
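As a rough sketch of that real-time case (the `context` and `sampleBuffer`
variables are assumed to be set up by the host, and the Web MIDI event shape
here follows the current draft):

```javascript
// Standard MIDI tuning: A4 (note number 69) = 440 Hz.
function midiNoteToFrequency(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// Sketch: play a sample immediately when a MIDI note-on arrives.
function onMIDIMessage(event) {
  var status = event.data[0] & 0xf0;
  var velocity = event.data[2];
  if (status === 0x90 && velocity > 0) { // note-on with non-zero velocity
    var source = context.createBufferSource();
    source.buffer = sampleBuffer;
    // Repitch the sample relative to A4 by adjusting playback rate.
    source.playbackRate.value = midiNoteToFrequency(event.data[1]) / 440;
    source.connect(context.destination);
    source.start(0); // noteOn(0) in the current draft's naming
  }
}
```

This works fine for live input precisely because everything is "now"; the
problems below only appear once you need sample-accurate playback of stored
sequences.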

However, here's a simple sketch of how one might build a DAW with a
plugin architecture using the Web Audio API:

 * You have tracks, which may contain audio and sequencing data (e.g. MIDI,
OSC and/or user-defined envelopes). Each of these inputs can either be
recorded from an external source or be a static piece.

 * You have an effects list for each track, the effects being picked from
the available plugins.

 * You have plugins. The plugins are given references to two gain nodes,
one for input and one for output, as well as a reference to the
AudioContext. In return, they give AudioParam references back to the
host, along with some information about what the AudioParams stand for,
their min/max values and so on. The plugin sets up a sub-graph between the
given gain nodes.
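A minimal sketch of that plugin contract might look like this (the function
name, the descriptor fields and `createGainNode` vs. `createGain` are all
illustrative assumptions, not anything specified):

```javascript
// Sketch: a trivial "gain" plugin wiring itself between the two
// host-provided nodes and reporting its automatable parameters.
function createGainPlugin(context, inputNode, outputNode) {
  // Prefer the newer method name if present, fall back to the older one.
  var gain = context.createGain ? context.createGain()
                                : context.createGainNode();
  // The plugin builds its sub-graph between the host's gain nodes.
  inputNode.connect(gain);
  gain.connect(outputNode);
  // Hand AudioParam references back to the host with metadata, so the
  // host can draw UI and record/play back automation for them.
  return {
    params: [
      { name: "gain", param: gain.gain, min: 0, max: 2, defaultValue: 1 }
    ]
  };
}
```

The host never needs to know what the sub-graph looks like; it only sees the
two gain nodes it handed over and the AudioParam descriptors it got back.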

This would be a very basic setup, but with the current API design there are
some hard problems to solve here. The audio is relatively easy, regardless
of whether it's coming from an external source or not. It's just a source
node of some sort. The sequencing part is where stuff gets tricky.

In the plugin models I've used, the sequencing data is paired with the
audio data in processing events, i.e. you're told to fill some buffers,
given a few k-rate params, a few a-rate params and some sequencing events
as well as the input audio data. This makes it very simple to synchronize
the sequencing events with the audio. But with the Web Audio API, the only
place where you get a processing event like this is the JS node (the
JavaScriptAudioNode), and even there you currently only get the input audio.
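To make the pairing concrete, here is roughly what a host would have to do
by hand inside a JS node callback today: keep its own sorted event queue and
split each processing block at event boundaries so events land on the right
sample. All the names (`eventQueue`, `render`, `processBlock`) are
hypothetical host-side conventions, not API:

```javascript
// Sketch: render one processing block, splitting it wherever a queued
// sequencer event falls, so events are applied sample-accurately.
// `eventQueue` is sorted by event.time (seconds); `render.fill(a, b)`
// renders samples [a, b) and `render.handleEvent(e)` applies an event.
function processBlock(blockStartTime, blockLength, sampleRate,
                      eventQueue, render) {
  var offset = 0;
  while (offset < blockLength) {
    var next = eventQueue[0];
    var nextOffset = blockLength;
    if (next) {
      var sampleIndex =
        Math.round((next.time - blockStartTime) * sampleRate);
      if (sampleIndex <= offset) {
        // The event is due at (or before) the current position.
        render.handleEvent(eventQueue.shift());
        continue;
      }
      nextOffset = Math.min(sampleIndex, blockLength);
    }
    render.fill(offset, nextOffset); // render audio up to the next event
    offset = nextOffset;
  }
}
```

Doable, but every plugin host has to reinvent this, and it only works for
things you synthesize yourself inside the JS node; you can't split a native
node's output this way.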

What would be the proposed solution for handling this case? And please, no
setTimeout(). A system is only as strong as its weakest link, and a
DAW/sequencer that relies on setTimeout() is going to be utterly unreliable,
which a DAW can't afford to be.

Received on Tuesday, 21 August 2012 17:58:29 UTC
