Re: Thoughts and questions on the API from a modular synth point of view

Hi Peter, thanks for the questions!

On Sat, Jul 14, 2012 at 8:50 PM, Peter van der Noord
<peterdunord@gmail.com> wrote:

> I'm currently working on a graphical interface for the Web Audio API,
> allowing the user to easily create graphs (and easily save/load them). I've
> made a modular synthesizer in Flash (www.patchwork-synth.com) before this
> and it seemed like a challenging idea to do this in js.
>
> I have some questions and thoughts about the draft I read on w3.
>
> - Is it possible to check if an AudioParam runs at a-rate or k-rate?
>

Currently, each parameter is defined in the spec to be either a-rate or
k-rate.  Although this is not exposed as an attribute of AudioParam, I
think you should be able to keep track of these values in your own client
code.
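For example, here's a minimal sketch of what I mean by tracking it yourself
(the node/parameter names and rates below are only illustrative - fill them
in from the spec):

    // Hand-maintained table of parameter rates, copied from the spec,
    // since AudioParam doesn't expose this itself.
    var PARAM_RATES = {
      'AudioGainNode.gain': 'a-rate',
      'DelayNode.delayTime': 'k-rate'   // placeholder values - verify against the spec
    };

    function rateOf(nodeType, paramName) {
      return PARAM_RATES[nodeType + '.' + paramName];
    }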


> - Is it possible to check how many channels a certain output has? It seems
> that a signal can have any number of channels; is there a way to find how
> many a certain output generates?
>

That's a good question.  Currently there is no such API, although we could
consider adding one.  One thing to consider here is that the number of
channels of an output is not necessarily fixed and can change dynamically
(potentially very often and quickly) if there is an ever-changing set of
AudioBufferSourceNodes connected to (for example) an AudioGainNode.  Some
of these AudioBufferSourceNodes may be mono, others stereo, perhaps others
5.1.  Depending on which combination of AudioBufferSourceNodes is playing
at any given moment, the AudioGainNode's output will have a different
number of channels.
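As a rough sketch of the kind of situation I mean (assuming monoBuffer and
stereoBuffer are already-decoded AudioBuffers):

    var gain = context.createGainNode();
    gain.connect(context.destination);

    var mono = context.createBufferSource();
    mono.buffer = monoBuffer;                  // 1 channel
    mono.connect(gain);

    var stereo = context.createBufferSource();
    stereo.buffer = stereoBuffer;              // 2 channels
    stereo.connect(gain);

    // While only the mono source is playing, the gain node's output is mono;
    // once the stereo source starts two seconds later, it becomes stereo.
    mono.noteOn(0);
    stereo.noteOn(context.currentTime + 2);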


> - Can the number of inputs/outputs on a ChannelMixer/ChannelMerger be
> changed on the fly?
>

They can't.  But you should keep in mind that it's not necessary to "use"
all of the inputs or outputs at any given time.  So, for example, in an
AudioChannelMerger:
https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#AudioChannelMerger

notice that in the first example there are 4 inactive/unused inputs.  In
general, you could create an AudioChannelMerger with a large number of
inputs, many of which are unused at certain times and have connections
made at other times.  In other words, it's not necessary to connect an
input just because it's there.

Thus, there's no reason to change the number of inputs/outputs since you
can just connect what you need or want at any time.
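A minimal sketch (sourceA and sourceB stand in for whatever nodes you're
mixing; per the spec the merger defaults to 6 inputs):

    var merger = context.createChannelMerger();
    merger.connect(context.destination);

    sourceA.connect(merger, 0, 0);   // sourceA output 0 -> merger input 0
    sourceB.connect(merger, 0, 3);   // sourceB output 0 -> merger input 3
                                     // inputs 1, 2, 4 and 5 simply stay unused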


> - Is this the final design for how disconnecting will work? It seems a bit
> impractical to only be able to disconnect a whole output. When it has
> multiple connections, they are all gone (I assume that's how it works)
>

We have a separate issue for improving the disconnect() API to be more
general:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=17793
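In the meantime, one possible workaround (just a pattern, not anything in
the spec) is to give each connection you may want to remove individually its
own intermediate AudioGainNode, so disconnecting one branch doesn't tear
down the others (filterNode and delayNode here are assumed to be nodes
you've already created):

    var toFilter = context.createGainNode();
    var toDelay = context.createGainNode();

    source.connect(toFilter);
    source.connect(toDelay);
    toFilter.connect(filterNode);
    toDelay.connect(delayNode);

    // Later: remove only the filter branch; the delay branch keeps playing.
    toFilter.disconnect(0);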


> - Will it be possible in the future to retrieve the full graph from the
> audiocontext? It looks like it doesn't give any info on it.
>

I can't say for sure what will be available in the future.  But in the
meantime, you can certainly track the state of connections in your own
client code.
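For example, here's a minimal sketch of a client-side registry (the names
are just illustrative):

    // Record every connection you make so the graph can be inspected,
    // saved and reloaded later.
    var connections = [];

    function patch(fromNode, toNode, output, input) {
      fromNode.connect(toNode, output || 0, input || 0);
      connections.push({ from: fromNode, to: toNode,
                         output: output || 0, input: input || 0 });
    }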


> - Doesn't the oscillator have the option to play from a buffer? It seems
> to have a wavetable, but I have no idea what that is (contains IFFR data?).
>

The WaveTable represents "periodic" waveforms for use with Oscillator.  For
arbitrary buffer playback, the AudioBufferSourceNode is what you want:
https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#AudioBufferSourceNode
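For example, here's a rough sketch of playing an arbitrary buffer (assuming
arrayBuffer holds encoded audio data you've loaded, e.g. via XHR):

    context.decodeAudioData(arrayBuffer, function(buffer) {
      var source = context.createBufferSource();
      source.buffer = buffer;
      source.connect(context.destination);
      source.noteOn(0);   // start playback immediately
    });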


> - Is there a way to retrieve the params of a module?
>

No, but since they're hard-coded/fixed in the spec, you can already know
what they are in your client code.


> - is there a way to retrieve more info about an audioparam? (name,
> description)
>

No, but you should be able to track "meta-data" about a parameter in your
own client code.
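A minimal sketch of what that might look like (the descriptions below are
only illustrative - take them from the spec text):

    // Hand-maintained metadata about the parameters of each node type.
    var PARAM_INFO = {
      'AudioGainNode': {
        gain: { description: 'Linear gain applied to the input signal' }
      },
      'BiquadFilterNode': {
        frequency: { description: 'Filter frequency in Hz' },
        Q:         { description: 'Quality factor' }
      }
    };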


> - It looks like the JavaScriptAudioNode has an unchangeable number of
> inputs and outputs. This freaks me out :) Is it going to stay this way?
>

It would be good to support variable numbers of inputs/outputs.  We have an
issue about this:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=17533



> - The ability to create ramps on an audioParam seems nice, but I have the
> feeling it's in the wrong place. Wouldn't that ability be better suited in
> a separate module, and triggerable by a GATE_ON signal (a concept idea I
> describe below)? Give it a curve type setting (LIN, EXP), audioparams for
> the time, and you'd be able to control nodes/params in a much more flexible
> way.
>

I believe the current AudioParam design is quite flexible and I hope to
create more examples showing how to effectively use them in a variety of
use cases.
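As one small example (a sketch only - the times and values are arbitrary),
an attack/release envelope on a gain parameter:

    var gainNode = context.createGainNode();
    var t = context.currentTime;

    gainNode.gain.setValueAtTime(0, t);
    gainNode.gain.linearRampToValueAtTime(1.0, t + 0.01);        // 10ms linear attack
    gainNode.gain.exponentialRampToValueAtTime(0.001, t + 0.5);  // exponential release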


> - Can't I add AudioParams on a JavaScript audio node?
>

Not currently, but we are tracking this:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=17388


> - Can't I pause/resume the whole system?
>

Not through a specific API call named pause(), but you can effectively
achieve pause/resume behavior by leveraging the API as a whole: starting
and stopping sources, and controlling gain at various points in the
graph.  ToneCraft is a good example of global pause/resume (using the
space bar in this case):
http://labs.dinahmoe.com/ToneCraft/
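For instance, one simple pattern (a sketch only, and note it mutes the graph
rather than truly suspending processing) is to route everything through a
single master gain node:

    var masterGain = context.createGainNode();
    masterGain.connect(context.destination);
    // ...connect the rest of the graph into masterGain...

    function pause()  { masterGain.gain.value = 0; }
    function resume() { masterGain.gain.value = 1; }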



>
> And a final thing: would it be an idea to replace the calling of certain
> methods on certain audioobjects with a few logic-based signal
> interpretations? For instance, let's say a signal that:
> - crosses through 0 from negative to positive is called GATE_ON
> - a signal that goes through 0 from pos to neg is called GATE_OFF
> - a signal >0 is GATE_HIGH
> - a signal <=0 is GATE_LOW
>
> This way, you can have audioparameters that respond to those
> states/events. For example:
> - the AudioBufferSourceNode can have a 'control' parameter that starts the
> signal whenever it gets a GATE_ON, instead of a noteOn() command.
> - the oscillator can play if it gets a GATE_HIGH, instead of (again) the
> noteOn() command.
> - you can start envelopes on GATE_HIGH events
>
> This gives you *a lot* more flexibility and fun towards triggering certain
> actions, and allows you to create nice sequencers. I really don't see how
> to implement the calling of methods to start something into a
> continuous-signal based graph.
>

Although I really appreciate the early analog synth notion of analog "gate"
signals, I'm not sure I agree that it's *a lot* more flexible than the
current design, which allows arbitrary sample-accurate scheduling of audio
sources and AudioParam changes.  After all, most modern electronic music
software doesn't use the "gate" approach and uses other scheduling
techniques.  The current design allows for an enormous range of sequencer
applications, but, if you want gates, then you can certainly analyse an
audio signal at any point in the graph with a JavaScriptAudioNode (for
example by looking at zero-crossings or whatever you want) and then
schedule events based on that information.  So, in other words, you can
build an application which exposes the concept of "gate" at the UI
application level and present it to the user using that metaphor.
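For example, here's a rough sketch of detecting GATE_ON (a negative-to-positive
zero crossing) on a control signal with a JavaScriptAudioNode (controlSignal
and onGateOn are hypothetical names for your own signal and application
callback):

    var gateNode = context.createJavaScriptNode(1024, 1, 1);
    var previousSample = 0;

    gateNode.onaudioprocess = function(event) {
      var input = event.inputBuffer.getChannelData(0);
      for (var i = 0; i < input.length; i++) {
        if (previousSample <= 0 && input[i] > 0) {
          onGateOn();   // schedule noteOn(), envelopes, etc. from here
        }
        previousSample = input[i];
      }
    };

    // Pull the node into the processing graph (through a muted gain so the
    // control signal isn't heard), and feed it the signal to analyse.
    var silent = context.createGainNode();
    silent.gain.value = 0;
    gateNode.connect(silent);
    silent.connect(context.destination);
    controlSignal.connect(gateNode);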


>
>
> Regards,
> Peter van der Noord
> www.petervandernoord.nl/blog
> www.patchwork-synth.com
>

Peter, thanks again for your comments/questions - they're good ones!
Cheers,
Chris

Received on Tuesday, 31 July 2012 01:05:56 UTC