Re: Web Audio API sequencer capabilities

This is the *hit; tough, but a great direction to work toward.

In Java, I once created a player engine that was a composite iterator over a set of parts/tracks. The timing information is derived from, and inherent in, the object structure of the parts and their contained time-chunked objects. As the iterator steps through the parts, using that timing information to sync playback, it becomes possible to create MIDI messages, or alternatively just trigger oscillators or audio buffers, based on the inner information the stepping iterator retrieves from each chunk.
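
To make that concrete, here is a minimal TypeScript sketch of such a composite iterator driving Web Audio scheduling. The names (Part, Chunk, startBeat, schedule) are made up for illustration; they don't come from my Java engine or any existing library:

    // A part is just an ordered bag of time-chunked objects.
    interface Chunk {
      startBeat: number;   // position within the arrangement, in beats
      datum: unknown;      // note, envelope point, AudioBuffer, ...
    }

    interface Part {
      chunks: Chunk[];
    }

    // Composite iterator: steps through every chunk of every part in time order.
    function* compositeIterator(parts: Part[]): Generator<Chunk> {
      const all = parts.flatMap(p => p.chunks);
      all.sort((a, b) => a.startBeat - b.startBeat);
      yield* all;
    }

    // Walks the iterator and schedules playback against the AudioContext clock.
    // Only AudioBuffer datums are handled here; MIDI or envelope datums would be
    // dispatched to their own handlers instead.
    function schedule(ctx: AudioContext, parts: Part[], bpm: number, startTime: number) {
      const secondsPerBeat = 60 / bpm;
      for (const chunk of compositeIterator(parts)) {
        if (chunk.datum instanceof AudioBuffer) {
          const src = ctx.createBufferSource();
          src.buffer = chunk.datum;
          src.connect(ctx.destination);
          src.start(startTime + chunk.startBeat * secondsPerBeat);
        }
      }
    }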

Since a graph of audio nodes can get complex and be quite unique, it would also be great if there were a JSON configuration format for audio graphs. I can see a configuration being associated/mapped with a given part or parts, and deserializing that configuration would produce the actual graph. The datums in the part's chunks would then be sent to the mapped audio graph. These datums can be anything, including time-series data such as envelopes. Anything goes, as long as there is a decent composite iterator and a mesh of timing information that aligns/schedules the events produced from those datums in time.
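
As a rough illustration only (the JSON schema here is invented on the spot, not a proposal for an actual format), a configuration and the code that deserializes it into a live graph could look something like this TypeScript sketch:

    interface NodeSpec {
      id: string;
      type: "oscillator" | "gain";
      options?: { frequency?: number; gain?: number };
    }

    interface GraphConfig {
      nodes: NodeSpec[];
      connections: { from: string; to: string }[];
    }

    // A tiny graph: oscillator -> gain -> destination.
    const exampleConfig: GraphConfig = {
      nodes: [
        { id: "osc", type: "oscillator", options: { frequency: 440 } },
        { id: "amp", type: "gain", options: { gain: 0.5 } }
      ],
      connections: [
        { from: "osc", to: "amp" },
        { from: "amp", to: "destination" }
      ]
    };

    // Deserialize the configuration into actual Web Audio nodes and wire them up.
    function buildGraph(ctx: AudioContext, config: GraphConfig): Map<string, AudioNode> {
      const nodes = new Map<string, AudioNode>();
      nodes.set("destination", ctx.destination);

      for (const spec of config.nodes) {
        if (spec.type === "oscillator") {
          const osc = ctx.createOscillator();
          if (spec.options?.frequency !== undefined) osc.frequency.value = spec.options.frequency;
          nodes.set(spec.id, osc);
        } else {
          const amp = ctx.createGain();
          if (spec.options?.gain !== undefined) amp.gain.value = spec.options.gain;
          nodes.set(spec.id, amp);
        }
      }

      for (const edge of config.connections) {
        nodes.get(edge.from)!.connect(nodes.get(edge.to)!);
      }
      return nodes;
    }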

I'm still thinking about the configuration; it could also be generated based on specific datums being present in a given part. In that way the configuration of the graph would be produced entirely by datums/directives in the part(s) themselves. Since the part is a model, it could be observed, and changes could update its associated audio graph.
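
Continuing the sketch above (and reusing its hypothetical GraphConfig/buildGraph names), observing a part model and keeping its graph in sync might look something like this; the observe hook is invented, and any observer/event-emitter pattern would do:

    interface ObservablePart {
      directives: GraphConfig;               // graph description carried in the part's datums
      observe(listener: () => void): void;   // fires whenever the part model changes
    }

    function bindPartToGraph(ctx: AudioContext, part: ObservablePart) {
      let nodes = buildGraph(ctx, part.directives);
      part.observe(() => {
        // Tear down the old sub-graph and rebuild it from the updated directives.
        for (const node of nodes.values()) {
          if (node !== ctx.destination) node.disconnect();
        }
        nodes = buildGraph(ctx, part.directives);
      });
    }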

Awesome,

Thom


On 2012-08-21, at 12:58 PM, Jussi Kalliokoski wrote:

> Hello group,
> 
> I've been thinking about how to use the Web Audio API to write a full-fledged DAW with sequencing capabilities (e.g. MIDI), and I thought I'd share some thoughts and questions with you.
> 
> Currently, it's pretty straightforward to use the Web Audio API to schedule events in real time, which means it would play quite well together with other real-time APIs, such as the Web MIDI API. For example, you can just schedule an AudioBuffer to play whenever a noteon event is received from a MIDI source.
> 
> However, here's something of a simple idea of how to build a DAW with a plugin architecture using the Web Audio API:
> 
>  * You have tracks, which may contain audio and sequencing data (e.g. MIDI, OSC and/or user-defined envelopes). All of these inputs can either be recorded from an external source or be static pieces.
> 
>  * You have an effects list for each track, effects being available to pick from plugins.
> 
>  * You have plugins. The plugins are given references to two gain nodes, one for input and one for output, as well as a reference to the AudioContext. In return, they will give AudioParam references back to the host, as well as some information about what the AudioParams stand for, min/max values and so on. The plugin will set up a sub-graph between the given gain nodes.
> 
> This would be a very basic setup, but with the current API design there are some hard problems to solve here. The audio is relatively easy, regardless of whether it's coming from an external source or not. It's just a source node of some sort. The sequencing part is where stuff gets tricky.
> 
> In the plugin models I've used, the sequencing data is paired with the audio data in processing events, i.e. you're told to fill some buffers, given a few k-rate params, a few a-rate params and some sequencing events as well as the input audio data. This makes it very simple to synchronize the sequencing events with the audio. But with the Web Audio API, the only place where you get a processing event like this is the JS node, and even there you currently only get the input audio.
> 
> What would be the proposed solution for handling this case? And please, no setTimeout(). A system is as weak as its weakest link and building a DAW/Sequencer that relies on setTimeout is going to be utterly unreliable, which a DAW can't afford to be.
> 
> Cheers,
> Jussi

Received on Wednesday, 22 August 2012 13:42:18 UTC