Thoughts and questions on the API from a modular synth point of view

I'm currently working on a graphical interface for the Web Audio API, allowing the user to easily create graphs (and save/load them). Before this I made a modular synthesizer in Flash (www.patchwork-synth.com), and it seemed like a challenging idea to do the same in JS.

I have some questions and thoughts about the draft I read on the W3C site.

- Is it possible to check if an AudioParam runs at a-rate or k-rate?
- Is it possible to check how many channels a certain output has? It seems that a signal can have any number of channels; is there a way to find out how many a certain output generates?
- Can the number of inputs/outputs on a ChannelSplitter/ChannelMerger be changed on the fly?
- Is this the final design for how disconnecting will work? It seems a bit impractical to only be able to disconnect a whole output: when it has multiple connections, they are all removed at once (I assume that's how it works). See the workaround sketch after this list.
- Will it be possible in the future to retrieve the full graph from the AudioContext? It looks like it doesn't expose any info about it.
- Doesn't the oscillator have the option to play from a buffer? It seems to have a WaveTable, but I have no idea what that is (does it contain IFFT data?).
- Is there a way to retrieve the params of a module? 
- Is there a way to retrieve more info about an AudioParam (name, description)?
- It looks like the JavaScriptAudioNode has an unchangeable number of inputs and outputs. This freaks me out :) Is it going to stay this way?
- The ability to create ramps on an AudioParam seems nice, but I have the feeling it's in the wrong place (the draft's current ramp methods, as I read them, are sketched after this list). Wouldn't that ability be better suited to a separate module, triggerable by a GATE_ON signal (a concept I describe below)? Give it a curve-type setting (LIN, EXP) and AudioParams for the time, and you'd be able to control nodes/params in a much more flexible way.
- Can't I add AudioParams to a JavaScriptAudioNode?
- Can't I pause/resume the whole system?
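
On the disconnect question, here's a minimal workaround sketch, assuming disconnect() really does drop all of an output's connections at once. The connect/disconnect helpers and the patchCord name are mine, not from the draft; the idea is to route every connection through its own GainNode so it can be undone individually:

  var context = new webkitAudioContext();

  // Route each connection through its own unity-gain node, so removing
  // one connection doesn't touch the source's other connections.
  function connect(source, destination) {
    var patchCord = context.createGainNode(); // gain defaults to 1
    source.connect(patchCord);
    patchCord.connect(destination);
    return patchCord; // keep this to undo just this one connection
  }

  function disconnect(patchCord) {
    // Drops only this cord's link to its destination; the orphaned cord
    // node is leaked, but the source keeps its other connections.
    patchCord.disconnect();
  }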
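
And on the ramp question, this is how I read the draft's current per-param automation (a sketch, assuming the webkitAudioContext/createGainNode names from current WebKit builds):

  var context = new webkitAudioContext();
  var gain = context.createGainNode();
  var now = context.currentTime;

  gain.gain.setValueAtTime(0, now);                         // start silent
  gain.gain.linearRampToValueAtTime(1, now + 0.1);          // LIN attack, 100 ms
  gain.gain.exponentialRampToValueAtTime(0.001, now + 1.0); // EXP decay (target must be > 0)

This all has to be scheduled from JS at absolute times; there's no way to have a signal in the graph trigger it, which is exactly what the gate idea below would address.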

And a final thing: would it be an idea to replace the calling of certain methods on certain audio objects with a few logic-based signal interpretations? For instance, let's say that a signal that:
- crosses through 0 from negative to positive is called GATE_ON
- a signal that goes through 0 from pos to neg is called GATE_OFF
- a signal >0 is GATE_HIGH 
- a signal <=0 is GATE_LOW

This way, you can have AudioParams that respond to those states/events. For example:
- the AudioBufferSourceNode can have a 'control' parameter that starts playback whenever it gets a GATE_ON, instead of a noteOn() command.
- the oscillator can play as long as it gets a GATE_HIGH, instead of (again) the noteOn() command.
- you can start envelopes on GATE_HIGH events

This gives you *a lot* more flexibility and fun in triggering certain actions, and allows you to create nice sequencers. I really don't see how calling methods to start something fits into a continuous-signal-based graph. A rough sketch of the gate idea follows below.
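
For concreteness, here's how the gate detection could be prototyped today with a JavaScriptAudioNode watching a control signal. The onGateOn/onGateOff callbacks and the controlSignal node are hypothetical placeholders; in the actual proposal this logic would live inside the nodes themselves, at audio rate:

  var context = new webkitAudioContext();
  var watcher = context.createJavaScriptNode(1024, 1, 1);
  var wasHigh = false;

  watcher.onaudioprocess = function (e) {
    var input = e.inputBuffer.getChannelData(0);
    for (var i = 0; i < input.length; i++) {
      var isHigh = input[i] > 0;           // GATE_HIGH vs GATE_LOW
      if (isHigh && !wasHigh) onGateOn();  // negative -> positive: GATE_ON
      if (!isHigh && wasHigh) onGateOff(); // positive -> negative: GATE_OFF
      wasHigh = isHigh;
    }
  };

  // controlSignal stands for whatever node generates the gate; the watcher
  // needs a route to the destination for onaudioprocess to fire in WebKit.
  controlSignal.connect(watcher);
  watcher.connect(context.destination);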


Regards,
Peter van der Noord
www.petervandernoord.nl/blog
www.patchwork-synth.com

Received on Monday, 16 July 2012 05:28:26 UTC