Re: Web Audio API sequencer capabilities

From: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
Date: Wed, 3 Oct 2012 10:59:04 +0300
Message-ID: <CAJhzemUsX36VOqEqP+qrOMZB9dkj2-5j24_E-w2fMxkyAEpvvA@mail.gmail.com>
To: Srikumar Karaikudi Subramanian <srikumarks@gmail.com>
Cc: public-audio@w3.org
Chris (Rogers), could I get your opinion on introducing an envelope
node and simplifying AudioParam?
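
Purely to make the question concrete, here is one shape such an
envelope node could take. This is a hypothetical sketch, not proposal
text: the createEnvelope() factory and its breakpoint format are made
up, while connect()-to-AudioParam and start() are existing API:

    var ctx = new webkitAudioContext();

    // Hypothetical: an envelope source node that outputs a breakpoint
    // curve and connects to an AudioParam, replacing the per-param
    // setValueAtTime()/linearRampToValueAtTime() scheduling methods.
    var env = ctx.createEnvelope([
        { time: 0.00, value: 0.0 },
        { time: 0.01, value: 1.0, curve: "linear" },
        { time: 0.50, value: 0.0, curve: "exponential" }
    ]);

    var gain = ctx.createGainNode();
    env.connect(gain.gain);     // drive the param from the envelope
    env.start(ctx.currentTime); // scheduled like any other source node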

Cheers,
Jussi

On Sat, Aug 25, 2012 at 8:19 AM, Srikumar Karaikudi Subramanian <
srikumarks@gmail.com> wrote:

> > This would be a very basic setup, but with the current API design there
> > are some hard problems to solve here. The audio is relatively easy,
> > regardless of whether it's coming from an external source or not. It's just
> > a source node of some sort. The sequencing part is where stuff gets tricky.
>
> Yes, it does appear tricky, but given that scheduling with native nodes
> mostly suffices, it seems to me that the ability to schedule JS audio nodes
> using noteOn/noteOff (now renamed start/stop), together with dynamic
> lifetime support, solves the scheduling problems completely. Such a
> scheduling facility need only be present for JS nodes that have no inputs,
> i.e. source nodes.
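>
> To make that concrete, here's a rough sketch of what such scheduling
> could look like, assuming a hypothetical start()/stop() pair on a JS
> node with no inputs (only the native-node part works today):
>
>     var ctx = new webkitAudioContext();
>
>     // Native source nodes can already be scheduled sample-accurately:
>     var osc = ctx.createOscillator();
>     osc.connect(ctx.destination);
>     osc.start(ctx.currentTime + 1.0);
>     osc.stop(ctx.currentTime + 2.0);
>
>     // Hypothetical: a JS source node (no inputs) scheduled the same
>     // way. It would receive processing callbacks only between start
>     // and stop, and could be garbage collected afterwards (dynamic
>     // lifetime).
>     var jsSource = ctx.createJavaScriptNode(1024, 0, 1);
>     jsSource.onaudioprocess = function (e) {
>         var out = e.outputBuffer.getChannelData(0);
>         for (var i = 0; i < out.length; i++)
>             out[i] = Math.random() * 2 - 1; // noise burst
>     };
>     jsSource.connect(ctx.destination);
>     jsSource.start(ctx.currentTime + 1.0); // not possible today
>     jsSource.stop(ctx.currentTime + 2.0);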
>
> We (at anclab) were thinking about similar scheduling issues within the
> context of building compose-able "sound models" using the Web Audio API. A
> prototype framework for this purpose that we built (
> http://github.com/srikumarks/steller) will generalize if JS nodes can be
> scheduled in the same way as buffer source nodes and oscillators. A
> bare-bones example of using the framework is available here -
> http://srikumarks.github.com/steller .
>
> "Steller" is intended for interactive high level sound/music models (think
> foot steps, ambient music generators and the like) and so doesn't have time
> structures that are editable or even a "play position" as a DAW would
> require, but it may be possible to build them atop/beside Steller. At the
> least, it suggests the sufficiency of the current scheduling API for native
> nodes.
>
> Best,
> -Kumar
>
> On 21 Aug, 2012, at 11:28 PM, Jussi Kalliokoski <
> jussi.kalliokoski@gmail.com> wrote:
>
> > Hello group,
> >
> > I've been thinking about how to use the Web Audio API to write a
> > full-fledged DAW with sequencing capabilities (e.g. MIDI), and I thought
> > I'd share some thoughts and questions with you.
> >
> > Currently, it's pretty straightforward to use the Web Audio API to
> > schedule events in real time, which means it would play quite well together
> > with other real-time APIs, such as the Web MIDI API. For example, you can
> > just schedule an AudioBuffer to play whenever a note-on event is received
> > from a MIDI source.
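> >
> > Something along these lines, say. This is a sketch: midiInput and its
> > onmessage handler are assumed from the Web MIDI draft, and
> > sampleBuffer is a previously decoded AudioBuffer:
> >
> >     var ctx = new webkitAudioContext();
> >
> >     function playSample(buffer, when) {
> >         var src = ctx.createBufferSource();
> >         src.buffer = buffer;
> >         src.connect(ctx.destination);
> >         src.start(when); // sample-accurate scheduling
> >     }
> >
> >     midiInput.onmessage = function (e) {
> >         var status = e.data[0] & 0xf0;
> >         var velocity = e.data[2];
> >         if (status === 0x90 && velocity > 0) // note-on
> >             playSample(sampleBuffer, ctx.currentTime);
> >     };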
> >
> > However, here's a simple idea of how to build a DAW with a
> > plugin architecture using the Web Audio API:
> >
> >  * You have tracks, which may contain audio and sequencing data (e.g.
> > MIDI, OSC and/or user-defined envelopes). Each of these inputs can either
> > be recorded from an external source or be a static piece.
> >
> >  * You have an effects list for each track, with the effects picked
> > from the available plugins.
> >
> >  * You have plugins. The plugins are given references to two gain nodes,
> > one for input and one for output, as well as a reference to the
> > AudioContext. In return, they give AudioParam references back to the
> > host, along with some information about what each AudioParam stands for,
> > its min/max values and so on. The plugin sets up a sub-graph between the
> > given gain nodes.
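> >
> > As a sketch (the init() signature and the returned descriptor format
> > are made up for illustration; the node and AudioParam calls are
> > existing API):
> >
> >     var context = new webkitAudioContext();
> >
> >     // Plugin side: build a sub-graph between the given gain nodes
> >     // and hand AudioParam references back to the host.
> >     function init(context, input, output) {
> >         var filter = context.createBiquadFilter();
> >         input.connect(filter);
> >         filter.connect(output);
> >         return {
> >             params: [
> >                 { param: filter.frequency, // an AudioParam
> >                   name: "cutoff", min: 20, max: 20000 }
> >             ]
> >         };
> >     }
> >
> >     // Host side: create the insertion points, let the plugin wire
> >     // itself in, and expose the returned params in the UI.
> >     var input = context.createGainNode();
> >     var output = context.createGainNode();
> >     var descriptor = init(context, input, output);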
> >
> > This would be a very basic setup, but with the current API design there
> > are some hard problems to solve here. The audio is relatively easy,
> > regardless of whether it's coming from an external source or not. It's just
> > a source node of some sort. The sequencing part is where stuff gets tricky.
> >
> > In the plugin models I've used, the sequencing data is paired with the
> > audio data in processing events, i.e. you're told to fill some buffers,
> > given a few k-rate params, a few a-rate params and some sequencing events
> > as well as the input audio data. This makes it very simple to synchronize
> > the sequencing events with the audio. But with the Web Audio API, the only
> > place where you get a processing event like this is the JS node, and even
> > there you currently only get the input audio.
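> >
> > Concretely, all the JS node's processing callback delivers today is
> > audio (a minimal sketch):
> >
> >     var context = new webkitAudioContext();
> >
> >     var node = context.createJavaScriptNode(1024, 1, 1);
> >     node.onaudioprocess = function (e) {
> >         // e.inputBuffer and e.outputBuffer carry audio only; there
> >         // is no accompanying list of MIDI/OSC/envelope events that
> >         // fall within this buffer to synchronize against.
> >         var inp = e.inputBuffer.getChannelData(0);
> >         var out = e.outputBuffer.getChannelData(0);
> >         for (var i = 0; i < inp.length; i++)
> >             out[i] = inp[i]; // pass-through
> >     };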
> >
> > What would be the proposed solution for handling this case? And please,
> > no setTimeout(). A system is only as strong as its weakest link, and a
> > DAW/sequencer built on setTimeout() is going to be utterly unreliable,
> > which a DAW can't afford to be.
> >
> > Cheers,
> > Jussi
>
>
Received on Wednesday, 3 October 2012 07:59:32 UTC
