
Re: Thoughts and questions on the API from a modular synth point of view

From: Peter van der Noord <peterdunord@gmail.com>
Date: Thu, 2 Aug 2012 12:33:17 +0200
Message-ID: <CAL9tNz-at6SK_U1vMPnHZ_ehmxRr77nG5VVd8S62PX91-cY8bA@mail.gmail.com>
To: Chris Rogers <crogers@google.com>
Cc: "public-audio@w3.org" <public-audio@w3.org>
>> - Can the number of inputs/outputs on a ChannelMixer/ChannelMerger be
>> changed on the fly?
> They can't.  But you should keep in mind that it's not necessary to "use"
> all of the inputs or outputs at any given time.  So, for example, in an
> AudioChannelMerger:
> https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#AudioChannelMerger
> notice that in the first example there are 4 inactive/unused inputs.  In
> general, you could create an AudioChannelMerger with a large number of
> inputs, many of which are unused at certain times, while at other times
> connections are made.  In other words, it's not necessary to connect an
> input just because it's there.
> Thus, there's no reason to change the number of inputs/outputs since you
> can just connect what you need or want at any time.

I'm currently using the mergers/splitters to be able to have multiple inputs
and outputs, but I found the channel rearranging that the merger does not
that helpful, tbh. I have to add gain modules to each of the merger's inputs
to get the module to do what I want: http://i.imgur.com/MLtnQ.png
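My current workaround boils down to roughly this (a sketch; `buildMixer` is my own illustrative name):

```javascript
// Workaround: one GainNode in front of each merger input, so every
// input gets its own level control (as in the screenshot above).
function buildMixer(ctx, numInputs) {
  var merger = ctx.createChannelMerger(numInputs);
  var inputGains = [];
  for (var i = 0; i < numInputs; i++) {
    var g = ctx.createGainNode(); // one gain per input
    g.connect(merger, 0, i);      // gain i feeds merger input i
    inputGains.push(g);
  }
  // connect sources to inputGains[i] instead of the merger directly
  return { merger: merger, inputGains: inputGains };
}
```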

But hopefully this is temporary, until I can create nodes with multiple
inputs and outputs.

>> - Is this the final design for how disconnecting will work? It seems a
>> bit impractical to only be able to disconnect a whole output. When it has
>> multiple connections, they are all gone (i assume that's how it works)
> We have a separate issue for improving the disconnect() API to be more
> general:
> https://www.w3.org/Bugs/Public/show_bug.cgi?id=17793
Ah, yes. There are indeed some cases that currently cannot be disconnected.
But apart from that, I'd really like functionality to simply disconnect a
module fully.

I don't see how this:

nodeA.disconnect(nodeToDisconnect, 0);
nodeB.disconnect(nodeToDisconnect, 1, 1);
nodeC.disconnect(nodeToDisconnect, 2);
nodeD.disconnect(nodeToDisconnect, 3, 1);
nodeE.disconnect(nodeToDisconnectsAudioParam1); // disconnecting from audioparams is not possible yet, btw
nodeToDisconnect.disconnect(nodeG, 2, 1);
nodeToDisconnect.disconnect(nodeG, 2, 2);
nodeToDisconnect.disconnect(nodeH, 2, 2);

can be preferred to a single call that removes every connection to and from
the node at once, something like:

nodeToDisconnect.disconnect();
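In the meantime this has to live in application code, so I keep a registry of every connection I make. Roughly like this (a sketch; `createPatchBay` and the stored fields are my own illustrative names):

```javascript
// Track every connection at the application level, so one call can
// remove all connections to and from a node. A workaround only; this
// is not part of the Web Audio API.
function createPatchBay() {
  var patches = [];
  return {
    connect: function (src, dest, output, input) {
      src.connect(dest, output || 0, input || 0);
      patches.push({ src: src, dest: dest, output: output || 0, input: input || 0 });
    },
    disconnectAll: function (node) {
      var keep = patches.filter(function (p) {
        return p.src !== node && p.dest !== node;
      });
      // disconnect() tears down a whole output, so rebuild from scratch:
      patches.forEach(function (p) { p.src.disconnect(p.output); });
      keep.forEach(function (p) { p.src.connect(p.dest, p.output, p.input); });
      patches = keep;
    },
    patches: function () { return patches; }
  };
}
```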
> It would be good to support variable numbers or inputs/outputs.  We have
> an issue about this:
> https://www.w3.org/Bugs/Public/show_bug.cgi?id=17533

I'm anxiously waiting for this to happen.

>> - Can't i pause/resume the whole system?
> Not through a specific API call named pause(), but you can effectively
> achieve pause/resume behavior by understanding how to leverage the API
> overall, how to start and stop sources, and control gain at various control
> points.  ToneCraft is a good example of global pause/resume (using the
> space bar in this case):
> http://labs.dinahmoe.com/ToneCraft/
Hmm, what ToneCraft does there isn't the pausing I meant. The sequencer
stops, but not the sound itself. I'd really like the possibility to stop
the audiocontext from asking for new buffers.
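For what it's worth, the gain-based approach you describe is easy enough to sketch (illustrative names), but it only mutes the output; the context keeps pulling buffers underneath:

```javascript
// Route everything through one master gain and mute it to "pause".
// This silences the output but does not stop the context from
// processing, which is the behavior I'm actually after.
function createMasterBus(ctx) {
  var master = ctx.createGainNode();
  var paused = false;
  master.connect(ctx.destination);
  return {
    input: master, // connect all sources/subgraphs here, not to destination
    toggle: function () {
      paused = !paused;
      master.gain.value = paused ? 0 : 1; // mute/unmute everything at once
      return paused;
    }
  };
}
```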

>> And a final thing: would it be an idea to replace the calling of certain
>> methods on certain audioobjects with a few logic-based signal
>> interpretations? For instance, let's say a signal that:
>> - crosses though 0 from negative to positive is called GATE_ON
>> - a signal that goes through 0 from pos to neg is called GATE_OFF
>> - a signal >0 is GATE_HIGH
>> - a signal <=0 is GATE_LOW
>> This way, you can have audioparameters that respond to those
>> states/events. For example:
>> - the AudioBufferSourceNode can have a 'control' parameter that starts
>> the signal whenever it gets a GATE_ON, instead of a noteOn() command.
>> - the oscillator can play if it gets a GATE_HIGH, instead of (again) the
>> noteOn() command.
>> - you can start envelopes on GATE_HIGH events
>> This gives you *a lot* more flexibility and fun towards triggering
>> certain actions, and allows you to create nice sequencers. I really don't
>> see how to implement the calling of methods to start something into a
>> continuous-signal based graph.
> Although I really appreciate the early analog synth notion of analog
> "gate" signals, I'm not sure I agree that it's *a lot* more flexible than
> the current design, which allows arbitrary sample-accurate scheduling of
> audio sources and AudioParam changes.  After all, most modern electronic
> music software doesn't use the "gate" approach and uses other scheduling
> techniques.  The current design allows for an enormous range of sequencer
> applications, but, if you want gates, then you can certainly analyse an
> audio signal at any point in the graph with a JavaScriptAudioNode (for
> example by looking at zero-crossings or whatever you want) and then
> schedule events based on that information.  So, in other words, you can
> build an application which exposes the concept of "gate" at the UI
> application level and present it to the user using that metaphor.

I don't think gate signals are purely tied to early analog synthesizers;
the concept is still in use. Pressing and holding a key on your MIDI
keyboard (or drawing a note in a MIDI track in your DAW) is a gate-like
signal (ok, technically a note-on and a note-off), and so is pressing a
pedal; any syncing of delays, LFOs etc. can be done through something like
that.

But I can see how it won't fit that well in the design; I have indeed
implemented my own audio-rate zero-crossing checks.
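For the record, the check I implemented boils down to something like this, run per block inside a JavaScriptAudioNode's onaudioprocess (a sketch; `detectGates` is my own illustrative name):

```javascript
// Scan one block of samples for zero crossings.
// GATE_ON  = crossing from <= 0 to > 0
// GATE_OFF = crossing from > 0 to <= 0
// Carry lastSample into the call for the next block, so crossings on
// block boundaries are not missed.
function detectGates(samples, prevSample) {
  var events = [];
  var prev = prevSample;
  for (var i = 0; i < samples.length; i++) {
    var s = samples[i];
    if (prev <= 0 && s > 0) events.push({ type: 'GATE_ON', index: i });
    else if (prev > 0 && s <= 0) events.push({ type: 'GATE_OFF', index: i });
    prev = s;
  }
  return { events: events, lastSample: prev };
}
```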

Received on Thursday, 2 August 2012 10:33:46 UTC
