
Re: Thoughts and questions on the API from a modular synth point of view

From: Chris Wilson <cwilso@google.com>
Date: Thu, 2 Aug 2012 09:41:50 -0700
Message-ID: <CAJK2wqXTGp0hMJDe6ca5t8WiSe1X=WJ36Bzf=JSmBHXgxC65jQ@mail.gmail.com>
To: Peter van der Noord <peterdunord@gmail.com>
Cc: Chris Rogers <crogers@google.com>, "public-audio@w3.org" <public-audio@w3.org>
On Thu, Aug 2, 2012 at 3:33 AM, Peter van der Noord
<peterdunord@gmail.com> wrote:

> - Can't I pause/resume the whole system?
>>>
>>
>> Not through a specific API call named pause(), but you can effectively
>> achieve pause/resume behavior by starting and stopping sources and
>> controlling gain at various points in the graph.  ToneCraft is a good
>> example of global pause/resume (toggled with the space bar in this case):
>> http://labs.dinahmoe.com/ToneCraft/
>>
>>
> Hmm, what ToneCraft does there isn't the pausing I meant. The sequencer
> stops, but not the sound itself. I'd really like the possibility to stop
> the AudioContext from asking for new buffers.
>
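The gain-based approach ToneCraft-style apps use could be sketched in a few
lines (a sketch only; the helper names and fade constant are illustrative,
not part of the API, and older implementations spell setTargetAtTime as
setTargetValueAtTime):

```javascript
// Sketch: "pausing" one branch of a graph by ramping a GainNode to
// silence rather than stopping the graph itself.  Helper names and the
// fade constant are illustrative, not part of the Web Audio API.
function pauseBranch(ctx, gainNode, fadeSec) {
  // A short ramp avoids clicks; sources keep running, the branch
  // just goes silent.
  gainNode.gain.setTargetAtTime(0, ctx.currentTime, fadeSec / 3);
}

function resumeBranch(ctx, gainNode, fadeSec) {
  gainNode.gain.setTargetAtTime(1, ctx.currentTime, fadeSec / 3);
}
```

Wiring each pausable branch through its own GainNode before the destination
is what makes per-branch pausing possible.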

Chris and I have talked through many different "pausing" behaviors - and
I'm quite sure he's thought about it even more.  The problem is that there
isn't one logical solution.  Take your "I want to stop the AudioContext
from asking for more buffers [temporarily]": how does that deal with a live
upstream source like microphone input (via getUserMedia)?  Is it really
"pausing" - i.e., recording like a DVR?  How much time can it buffer?
You'd probably also want a fast-forward API then... and this starts looking
less tenable.  It would also be a confusing challenge when
AudioContext.currentTime doesn't always proceed forward at a steady rate -
or (more likely) when branches of the audio graph need to get out of sync,
because you really want to "pause" branches, not always the whole graph.
That's what the Fieldrunners developers wanted
<http://www.html5rocks.com/en/tutorials/webaudio/fieldrunners/>, so they
could pause the game effects but keep the menu effects going.

You can, of course, implement your own "DVR node" and choose to pause time
- but then it's up to you to make the decisions about how much to buffer,
whether you want a "catch up" interface, etc.
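A minimal sketch of the core of such a node, assuming a FIFO of fixed-size
sample frames fed from a JavaScriptAudioNode's onaudioprocess callback (the
class name and the drop-oldest policy are illustrative choices, not
anything the API provides):

```javascript
// Sketch of a "DVR node" core: while paused it absorbs incoming frames
// (up to a cap) and outputs silence; on resume it drains the backlog
// before passing input through again.  Illustrative, not a real API.
class DvrBuffer {
  constructor(maxFrames) {
    this.maxFrames = maxFrames; // how much you choose to buffer
    this.frames = [];
    this.paused = false;
  }
  // Call once per processing block; returns the frame to output.
  process(input) {
    if (this.paused) {
      this.frames.push(Float32Array.from(input));
      if (this.frames.length > this.maxFrames) this.frames.shift(); // drop oldest
      return new Float32Array(input.length); // silence while paused
    }
    if (this.frames.length === 0) return input; // caught up: pass through
    this.frames.push(Float32Array.from(input)); // keep queueing while draining
    return this.frames.shift(); // "catch up" by draining oldest first
  }
}
```

Hooked up inside onaudioprocess, this replays the paused stretch at normal
speed; a real implementation would still have to decide what a
fast-forward or "catch up" interface looks like.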

>>> This gives you *a lot* more flexibility and fun in triggering certain
>>> actions, and allows you to create nice sequencers.  I really don't see
>>> how calling methods to start something fits into a
>>> continuous-signal-based graph.
>>>
>>
>> Although I really appreciate the early analog synth notion of analog
>> "gate" signals, I'm not sure I agree that it's *a lot* more flexible than
>> the current design, which allows arbitrary sample-accurate scheduling of
>> audio sources and AudioParam changes.  After all, most modern electronic
>> music software doesn't use the "gate" approach and uses other scheduling
>> techniques.  The current design allows for an enormous range of sequencer
>> applications, but, if you want gates, then you can certainly analyse an
>> audio signal at any point in the graph with a JavaScriptAudioNode (for
>> example by looking at zero-crossings, or whatever you want) and then
>> schedule events based on that information.  So, in other words, you can
>> build an application which exposes the concept of "gate" at the UI
>> application level and presents it to the user using that metaphor.
>>
>>
>
> I don't think gate signals are purely related to early analog
> synthesizers; the concept is still used. Pressing and holding a key on
> your MIDI keyboard (or drawing it in a MIDI track in your DAW) is a
> gate-like signal (OK, technically a note-on and a note-off), as is
> pressing a pedal; any syncing of delays, LFOs, etc. can be done through
> something like that.
>
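The kind of analysis Chris describes above could be sketched as a simple
level-based gate detector run over each block inside a JavaScriptAudioNode
callback (thresholds and names are illustrative; zero-crossing counting is
another option):

```javascript
// Sketch: derive gate on/off events from an audio-rate signal by
// thresholding with hysteresis (onThreshold > offThreshold avoids
// chattering near the boundary).  All names here are illustrative.
function makeGateDetector(onThreshold, offThreshold, onEvent) {
  let high = false;
  return function processBlock(samples) {
    for (let i = 0; i < samples.length; i++) {
      const v = Math.abs(samples[i]);
      if (!high && v >= onThreshold) {
        high = true;
        onEvent('on', i); // e.g. schedule a note-on or envelope attack
      } else if (high && v <= offThreshold) {
        high = false;
        onEvent('off', i);
      }
    }
  };
}
```

The event callback is where you would translate the detected gate back
into scheduled source starts or AudioParam changes.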

Pressing and holding a key on a MIDI keyboard isn't the same at all, in
terms of input, as a CV/gate signal - that's what I was trying to say
before.  I have used control signals in Web Audio graphs (that is, using
the "audio" connection as a control signal, not intended to be "listened
to") - that's what the vocoder does.  For gates, though, an event model is
usually much more efficient.  Syncing delays is relatively easy: just set
the control parameters to the related values (since the whole system shares
a single clock).  The hard-sync feature we discussed would probably be an
audio-rate control signal, and it's an interesting case as we think
about AudioParams on JSNodes.
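For example, tempo-syncing a delay against that single clock only takes a
little arithmetic (the helper name is illustrative; note that older
implementations spell createDelay as createDelayNode):

```javascript
// Sketch: compute a delay time from tempo, relying on the fact that
// the whole graph shares one clock.  Helper name is illustrative.
function delaySecondsForTempo(bpm, beatsPerRepeat) {
  return (60 / bpm) * beatsPerRepeat; // one beat lasts 60/bpm seconds
}

// In a real graph (browser only):
//   const delay = ctx.createDelay(2);
//   delay.delayTime.setValueAtTime(
//       delaySecondsForTempo(120, 0.75), ctx.currentTime); // dotted eighth
```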

-C
Received on Thursday, 2 August 2012 16:42:26 GMT
