Re: Web Audio API sequencer capabilities

I forgot to mention a few things I had in mind.

Usually in a DAW, if you press pause, effects such as reverbs will still
play out their tails. This is more or less the default behavior with the Web
Audio API, and when plugins receive a pause event they can simply call
cancelScheduledValues() on their AudioParams. However, if you press stop (or
panic in some cases), the audio gets cut off immediately, and all the
active notes in virtual instruments get killed (the latter is true for
pause as well). How would one replicate this behavior with the Web Audio
API? One could just drop the connection to the audio destination, but
AFAICT then when playback starts again, the tails would get played on
top of the new material, which would most likely be undesirable.
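
To make the problem concrete, here's a rough sketch of the workaround I
mean (master and context are just illustrative names for the final gain
node and the AudioContext):

function stop() {
  master.disconnect(); // output is cut off immediately, as expected
}

function play() {
  master.connect(context.destination);
  // ...but whatever tails are still sitting in the delay lines and
  // convolvers now play out on top of the restarted transport.
}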

My suggestion for this is that the native nodes would have a reset() method
(I'm not attached to the name) that would make them clear their internal
buffers, i.e. cancel all the echo/reverb tails and reset the internal state
of filters and an oscillator's phase.
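
For illustration, a hypothetical use of such a method (neither reset() nor
anything like it exists in the current draft):

function hardStop(pluginNodes) {
  pluginNodes.forEach(function (node) {
    if (node.reset) {
      node.reset(); // would flush delay/reverb buffers, filter state
    }                // and oscillator phase
  });
}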

Talking about Oscillators brings me to another subject. As the current
source nodes that are useful in synthesis (oscillator and buffer) are
designed to be single-shot only, the idea of plugins exposing their desired
AudioParams no longer works. That's OK, because the idea is flawed from
another perspective as well: the host might be scheduling value changes on
the params at the same time as the plugin, which would probably lead to
some very unexpected results (from the POV of the DAW user). This can be
worked around by exposing a gain node that pipes values into the
AudioParam(s) instead, but then the host loses the ability to efficiently
schedule values / envelopes on the parameters.
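
Roughly, the workaround looks like this (names are illustrative; filter is
whatever node the plugin's parameter lives on):

// Inside the plugin: keep the real AudioParam private, expose a gain node.
var cutoffInput = context.createGain(); // or createGainNode()
cutoffInput.connect(filter.frequency);  // output is summed into the param

// The host now has to synthesize a control signal and feed it into
// cutoffInput, instead of simply calling
// filter.frequency.setValueAtTime(...) / linearRampToValueAtTime(...).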

Suggestion: we separate the envelope / scheduling behavior from the
AudioParams into a new native source node, something like an EnvelopeNode.
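
A hypothetical sketch of what that could look like (none of these names or
methods exist; the scheduling calls are just borrowed from AudioParam for
illustration):

var env = context.createEnvelope();   // assumed constructor
env.setValueAtTime(0, t);
env.linearRampToValueAtTime(1, t + 0.01);
env.connect(filter.frequency);        // one envelope driving...
env.connect(gainNode.gain);           // ...several AudioParams at once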

Other benefits:
 * You could have multiple envelopes for a single AudioParam.
 * You could have a single envelope for multiple AudioParams.
 * AudioParam would be a much simpler interface.
 * The need for a node that outputs a constant offset would be covered.
 * Probably a lot more.

The only downside I can think of is that it's a breaking change. It's a big
one, but then again, this is a working draft with only one working
independent implementation anyway.

Cheers,
Jussi

On Tue, Aug 21, 2012 at 8:58 PM, Jussi Kalliokoski <jussi.kalliokoski@gmail.com> wrote:

> Hello group,
>
> I've been thinking about how to use the Web Audio API to write a
> full-fledged DAW with sequencing capabilities (e.g. MIDI), and I thought
> I'd share some thoughts and questions with you.
>
> Currently, it's pretty straightforward to use the Web Audio API to
> schedule events in real time, which means it would play quite well together
> with other real-time APIs, such as the Web MIDI API. For example, you can
> just schedule an AudioBuffer to play whenever a note-on event is received
> from a MIDI source.
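>
> A rough sketch of that (the isNoteOn() helper and the MIDI plumbing are
> placeholders, using the Web MIDI API's event names):
>
> midiInput.onmidimessage = function (event) {
>   if (isNoteOn(event.data)) {
>     var source = context.createBufferSource();
>     source.buffer = noteBuffer; // some preloaded AudioBuffer
>     source.connect(context.destination);
>     source.start(0);            // noteOn(0) in current implementations
>   }
> };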
>
> However, here's a simple idea of how one might build a DAW with a plugin
> architecture using the Web Audio API:
>
>  * You have tracks, which may contain audio and sequencing data (e.g.
> MIDI, OSC and/or user-defined envelopes). All of these inputs can either be
> recorded from an external source or be static pieces.
>
>  * You have an effects list for each track, with the effects picked from
> the available plugins.
>
>  * You have plugins. The plugins are given references to two gain nodes,
> one for input and one for output, as well as a reference to the
> AudioContext. In return, they give AudioParam references back to the
> host, as well as some information about what the AudioParams stand for,
> their min/max values and so on. The plugin sets up a sub-graph between the
> given gain nodes.
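>
> A rough sketch of what that contract might look like (all names here are
> made up for illustration, not a proposal for a concrete API):
>
> function createSimpleReverb(context, input, output) {
>   // convolver.buffer would be set to an impulse response elsewhere
>   var convolver = context.createConvolver();
>   var wet = context.createGain(); // or createGainNode()
>   input.connect(output);          // dry path
>   input.connect(convolver);
>   convolver.connect(wet);
>   wet.connect(output);
>   return {
>     params: [
>       { param: wet.gain, name: "wet", min: 0, max: 1, defaultValue: 0.3 }
>     ]
>   };
> }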
>
> This would be a very basic setup, but with the current API design there
> are some hard problems to solve here. The audio is relatively easy,
> regardless of whether it's coming from an external source or not. It's just
> a source node of some sort. The sequencing part is where stuff gets tricky.
>
> In the plugin models I've used, the sequencing data is paired with the
> audio data in processing events, i.e. you're told to fill some buffers,
> given a few k-rate params, a few a-rate params and some sequencing events
> as well as the input audio data. This makes it very simple to synchronize
> the sequencing events with the audio. But with the Web Audio API, the only
> place where you get a processing event like this is the JS node, and even
> there you currently only get the input audio.
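>
> For comparison, this is roughly everything the JS node gives you
> (createScriptProcessor() is createJavaScriptNode() in older revisions):
>
> var node = context.createScriptProcessor(1024, 2, 2);
> node.onaudioprocess = function (e) {
>   // e.inputBuffer and e.outputBuffer are all you get: no parameter
>   // values and no sequencing events for this block of audio.
> };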
>
> What would be the proposed solution for handling this case? And please, no
> setTimeout(). A system is only as strong as its weakest link, and building
> a DAW/sequencer that relies on setTimeout() is going to be utterly
> unreliable, which a DAW can't afford to be.
>
> Cheers,
> Jussi
>
