
Re: Web Audio API sequencer capabilities

From: Joe Berkovitz <joe@noteflight.com>
Date: Fri, 5 Oct 2012 18:57:03 -0400
Message-ID: <CA+ojG-ZUNLf_QrZSU4UPmi5L4oQX8HzBez_oJJ+QGUffSq=bJw@mail.gmail.com>
To: Jussi Kalliokoski <jussi.kalliokoski@gmail.com>
Cc: Srikumar Karaikudi Subramanian <srikumarks@gmail.com>, Chris Rogers <crogers@google.com>, "public-audio@w3.org Group" <public-audio@w3.org>

Actually, I didn't ever think a GainNode would generate its own signal.
Rather, it did not occur to me to drive a set of AudioParams with an
envelope via the audio-rate modulation feature, using a gain-controlled
unity signal. It is this last idea that seems a bit tricky and unclear for
API novices. If there were something like a UnitySourceNode, I would feel
better.
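
For the record, the pattern I mean goes roughly like this (a sketch from
memory; t0 is some start time, and someNode.someParam stands in for
whatever AudioParam is being modulated):

    // A "unity" source: a short looping buffer holding a constant 1.0.
    var buffer = context.createBuffer(1, 128, context.sampleRate);
    var data = buffer.getChannelData(0);
    for (var i = 0; i < data.length; i++) data[i] = 1.0;
    var unity = context.createBufferSource();
    unity.buffer = buffer;
    unity.loop = true;

    // The GainNode's gain AudioParam carries the envelope shape.
    var envelope = context.createGain();
    envelope.gain.setValueAtTime(0, t0);
    envelope.gain.linearRampToValueAtTime(1, t0 + 0.01); // attack
    envelope.gain.linearRampToValueAtTime(0, t0 + 0.5);  // release

    // The shaped unity signal drives the target AudioParam.
    unity.connect(envelope);
    envelope.connect(someNode.someParam);
    unity.start(t0);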

I'm interested in Chris R's take, since I'm sure he has given this some
thought.
On Oct 5, 2012 6:35 PM, "Jussi Kalliokoski" <jussi.kalliokoski@gmail.com>
wrote:

>
>
>> Srikumar's approach -- I assume it actually works! -- seems as though it
>> will handle this case, but it doesn't feel like an obvious solution that
>> API consumers would quickly find their way to. Is this the recommended
>> approach to this use case?
>>
>
> I assume you assumed what I assumed initially from Srikumar's post: that
> a GainNode would act as a generator node if it doesn't have any inputs.
> But this doesn't seem to be the case, so I think Srikumar was just
> pointing out that you can use the GainNode as a volume envelope.
>
> Cheers,
> Jussi
>
>
>> …Joe
>>
>>
>> On Oct 4, 2012, at 2:46 AM, Jussi Kalliokoski <
>> jussi.kalliokoski@gmail.com> wrote:
>>
>> I think my intent has been misinterpreted here: I know that you can do
>> even complex automation envelopes with the AudioParam, and its scheduling
>> capabilities are quite sufficient. What I'm saying is that the automation
>> behavior belongs in a separate node, and the AudioParam should be
>> simplified to have no scheduling capabilities; if you want to schedule
>> its values, you pipe the output of an envelope node into it.
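>>
>> To sketch the idea (EnvelopeNode and createEnvelope are hypothetical,
>> of course):
>>
>>     // Hypothetical: all value scheduling lives on a node, not the param.
>>     var env = context.createEnvelope();
>>     env.setValueAtTime(0, t0);
>>     env.linearRampToValueAtTime(1, t0 + 0.02);
>>     env.connect(oscillator.detune); // the AudioParam is a plain target
>>     env.start(t0);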
>>
>> I talked a lot about the benefits and use cases of this in my initial
>> post about the subject [1], but one example is a DAW plugin architecture,
>> where the plugin exposes the AudioParams necessary to control it. Imagine
>> we have a DAW that lets you draw envelopes for parameters, and a simple
>> oscillator plugin that exposes just a detune parameter and uses the
>> built-in Oscillator. As Oscillators are single-shot, you can't expose the
>> detune AudioParam directly, because it would apply to just one Oscillator.
>> Hence it probably makes sense to instead expose GainNodes that pipe their
>> output to the respective AudioParams, but with that the enveloping
>> possibilities are lost. However, if there were a separate EnvelopeNode
>> that handled the value scheduling, the host could just pipe that into the
>> GainNode that was provided. It would also leave the plugin free to have
>> its own value scheduling for the detune if it has tricks like frequency
>> glide; those separately controlled automations would get mixed and
>> wouldn't have to know about each other.
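>>
>> The host side might then look something like this (EnvelopeNode is still
>> hypothetical, and the plugin interface is just for illustration):
>>
>>     // The plugin exposes a long-lived GainNode rather than a
>>     // per-Oscillator AudioParam; internally it connects this node to
>>     // each Oscillator's detune param as the Oscillators are created.
>>     var detuneInput = context.createGain();
>>     plugin.params = { detune: detuneInput };
>>
>>     // The host pipes its drawn envelope into the exposed node.
>>     var hostEnvelope = context.createEnvelope(); // hypothetical
>>     hostEnvelope.connect(plugin.params.detune);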
>>
>> Cheers,
>> Jussi
>>
>> [1] http://lists.w3.org/Archives/Public/public-audio/2012JulSep/0614.html
>>
>> On Thu, Oct 4, 2012 at 5:23 AM, Srikumar Karaikudi Subramanian <
>> srikumarks@gmail.com> wrote:
>>
>>> A gain node's gain parameter effectively serves as an envelope node if
>>> you feed a unity signal into the gain node. This has become really
>>> expressive, particularly after connect() began supporting AudioParams as
>>> targets. Do you have a use case in mind that cannot be covered by such a
>>> gain node but would be covered by an envelope node?
>>>
>>>  -Kumar
>>>
>>> On 4 Oct, 2012, at 1:58 AM, Jussi Kalliokoski <
>>> jussi.kalliokoski@gmail.com> wrote:
>>>
>>> Let me be more specific: do you think the envelope functionality being
>>> in the AudioParam is more powerful than if it were in a separate node? If
>>> you do, why? What is the advantage it offers?
>>>
>>> Cheers,
>>> Jussi
>>>
>>> On Wed, Oct 3, 2012 at 8:40 PM, Chris Rogers <crogers@google.com> wrote:
>>>
>>>>
>>>>
>>>> On Wed, Oct 3, 2012 at 12:59 AM, Jussi Kalliokoski <
>>>> jussi.kalliokoski@gmail.com> wrote:
>>>>
>>>>> Chris (Rogers), could I get your opinion regarding introducing an
>>>>> envelope node and simplifying the AudioParam?
>>>>>
>>>>
>>>> AudioParam has been designed with a lot of care and thought for
>>>> implementing envelopes, so I believe it's in a very good spot right now.
>>>> As an example of how people are using these envelope capabilities in
>>>> sequencer applications, here's a good one from Patrick Borgeat:
>>>> https://dl.dropbox.com/u/15744891/www1002/macro_seq_test1002.html
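>>>>
>>>> For instance, a simple attack/release envelope on a gain, for some note
>>>> start time t, takes only a couple of calls:
>>>>
>>>>     var gain = context.createGain();
>>>>     gain.gain.setValueAtTime(0, t);
>>>>     gain.gain.linearRampToValueAtTime(1, t + 0.005);       // attack
>>>>     gain.gain.exponentialRampToValueAtTime(0.001, t + 1);  // release
>>>>     // (exponential ramps can't reach 0, hence the small target value)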
>>>>
>>>> Chris
>>>>
>>>>
>>>>
>>>>
>>>>>
>>>>> Cheers,
>>>>> Jussi
>>>>>
>>>>>
>>>>> On Sat, Aug 25, 2012 at 8:19 AM, Srikumar Karaikudi Subramanian <
>>>>> srikumarks@gmail.com> wrote:
>>>>>
>>>>>> > This would be a very basic setup, but with the current API design
>>>>>> there are some hard problems to solve here. The audio is relatively easy,
>>>>>> regardless of whether it's coming from an external source or not. It's just
>>>>>> a source node of some sort. The sequencing part is where stuff gets tricky.
>>>>>>
>>>>>> Yes, it does appear tricky, but given that scheduling with native
>>>>>> nodes mostly suffices, it seems to me that the ability to schedule JS
>>>>>> audio nodes using noteOn/noteOff (now renamed start/stop), together
>>>>>> with dynamic lifetime support, solves the scheduling problems
>>>>>> completely. Such a scheduling facility need only be present for JS
>>>>>> nodes that have no inputs, i.e. source nodes.
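>>>>>>
>>>>>> In other words, something along these lines, where start/stop on a JS
>>>>>> node is the hypothetical part:
>>>>>>
>>>>>>     var js = context.createJavaScriptNode(1024, 0, 1); // no inputs: a source
>>>>>>     js.onaudioprocess = function (e) { /* fill e.outputBuffer */ };
>>>>>>     js.connect(context.destination);
>>>>>>     js.start(when);           // hypothetical: schedule like a buffer source
>>>>>>     js.stop(when + duration); // hypothetical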
>>>>>>
>>>>>> We (at anclab) have been thinking about similar scheduling issues
>>>>>> within the context of building composable "sound models" using the Web
>>>>>> Audio API. A prototype framework we built for this purpose
>>>>>> (http://github.com/srikumarks/steller) will generalize if JS nodes can
>>>>>> be scheduled similarly to buffer source nodes and oscillators. A
>>>>>> bare-bones example of using the framework is available here:
>>>>>> http://srikumarks.github.com/steller
>>>>>>
>>>>>> "Steller" is intended for interactive high level sound/music models
>>>>>> (think foot steps, ambient music generators and the like) and so doesn't
>>>>>> have time structures that are editable or even a "play position" as a DAW
>>>>>> would require, but it may be possible to build them atop/beside Steller. At
>>>>>> the least, it suggests the sufficiency of the current scheduling API for
>>>>>> native nodes.
>>>>>>
>>>>>> Best,
>>>>>> -Kumar
>>>>>>
>>>>>> On 21 Aug, 2012, at 11:28 PM, Jussi Kalliokoski <
>>>>>> jussi.kalliokoski@gmail.com> wrote:
>>>>>>
>>>>>> > Hello group,
>>>>>> >
>>>>>> > I've been thinking about how to use the Web Audio API to write a
>>>>>> full-fledged DAW with sequencing capabilities (e.g. MIDI), and I thought
>>>>>> I'd share some thoughts and questions with you.
>>>>>> >
>>>>>> > Currently, it's pretty straightforward to use the Web Audio API to
>>>>>> schedule events in real time, which means it would play quite well
>>>>>> together with other real-time APIs, such as the Web MIDI API. For
>>>>>> example, you can just schedule an AudioBuffer to play whenever a noteon
>>>>>> event is received from a MIDI source.
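>>>>>> >
>>>>>> > For example, assuming some MIDI input object that hands you noteon
>>>>>> > events (midiInput and sampleForNote here are hypothetical names):
>>>>>> >
>>>>>> >     midiInput.onnoteon = function (note) {
>>>>>> >       var src = context.createBufferSource();
>>>>>> >       src.buffer = sampleForNote(note); // hypothetical sample lookup
>>>>>> >       src.connect(context.destination);
>>>>>> >       src.start(0); // play immediately on receipt
>>>>>> >     };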
>>>>>> >
>>>>>> > However, here's a simple idea of how to build a DAW with a plugin
>>>>>> architecture using the Web Audio API:
>>>>>> >
>>>>>> >  * You have tracks, which may contain audio and sequencing data
>>>>>> (e.g. MIDI, OSC and/or user-defined envelopes). All of these inputs can
>>>>>> either be recorded from an external source or be static pieces.
>>>>>> >
>>>>>> >  * You have an effects list for each track, with the effects picked
>>>>>> from the available plugins.
>>>>>> >
>>>>>> >  * You have plugins. The plugins are given references to two gain
>>>>>> nodes, one for input and one for output, as well as a reference to the
>>>>>> AudioContext. In return, they give AudioParam references back to the
>>>>>> host, along with some information about what the AudioParams represent,
>>>>>> their min/max values and so on. The plugin sets up a sub-graph between
>>>>>> the given gain nodes.
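>>>>>> >
>>>>>> > As a rough sketch of that contract (all names here are illustrative,
>>>>>> > not a real API):
>>>>>> >
>>>>>> >     var input = context.createGain();
>>>>>> >     var output = context.createGain();
>>>>>> >     var descriptors = plugin.init(context, input, output);
>>>>>> >     // descriptors might look like:
>>>>>> >     //   [{ name: "detune", param: someAudioParam,
>>>>>> >     //      min: -1200, max: 1200 }]
>>>>>> >     // and the plugin has built its sub-graph between input and output.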
>>>>>> >
>>>>>> > This would be a very basic setup, but with the current API design
>>>>>> there are some hard problems to solve here. The audio is relatively easy,
>>>>>> regardless of whether it's coming from an external source or not. It's just
>>>>>> a source node of some sort. The sequencing part is where stuff gets tricky.
>>>>>> >
>>>>>> > In the plugin models I've used, the sequencing data is paired with
>>>>>> the audio data in processing events, i.e. you're told to fill some buffers,
>>>>>> given a few k-rate params, a few a-rate params and some sequencing events
>>>>>> as well as the input audio data. This makes it very simple to synchronize
>>>>>> the sequencing events with the audio. But with the Web Audio API, the only
>>>>>> place where you get a processing event like this is the JS node, and even
>>>>>> there you currently only get the input audio.
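>>>>>> >
>>>>>> > Translated into a hypothetical processing event on a JS node, the
>>>>>> > shape I mean is roughly:
>>>>>> >
>>>>>> >     node.onaudioprocess = function (e) {
>>>>>> >       // e.inputBuffer exists today; the rest is hypothetical:
>>>>>> >       // e.parameters: k-rate and a-rate param values for this block
>>>>>> >       // e.midiEvents: sequencing events with sample-accurate offsets
>>>>>> >       // fill e.outputBuffer, applying each event at its offset
>>>>>> >     };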
>>>>>> >
>>>>>> > What would be the proposed solution for handling this case? And
>>>>>> please, no setTimeout(). A system is only as strong as its weakest
>>>>>> link, and building a DAW/sequencer that relies on setTimeout is going
>>>>>> to be utterly unreliable, which a DAW can't afford to be.
>>>>>> >
>>>>>> > Cheers,
>>>>>> > Jussi
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>>
>>
>>        ... .  .    .       Joe
>>
>> *Joe Berkovitz*
>> President
>>
>> *Noteflight LLC*
>> Boston, Mass.
>> phone: +1 978 314 6271
>>        www.noteflight.com
>>
>>
>