Re: Thoughts and questions on the API from a modular synth point of view

2012/7/16 Chris Wilson <cwilso@google.com>

> A couple of responses inline.  Chris Rogers will probably want to comment
> on some of these, but he's on vacation right now.
>
> On Sat, Jul 14, 2012 at 8:50 PM, Peter van der Noord <
> peterdunord@gmail.com> wrote:
>
>> I'm currently working on a graphical interface for the Web Audio API,
>> allowing the user to easily create graphs (and easily save/load them). I've
>> made a modular synthesizer in Flash (www.patchwork-synth.com) before
>> this and it seemed like a challenging idea to do this in js.
>>
>> I have some questions and thoughts about the draft I read on w3.org.
>>
>> - Is it possible to check if an AudioParam runs at a-rate or k-rate?
>>
>
> I presume you mean "live", i.e. a programming test.  No - but they are
> detailed that way in the spec.
>
>
I'm creating a modular editor for the API, and it would be nice if all the
objects (inputs, outputs, params) could tell everything about themselves, so
the editor can supply that information. It is indeed written in the specs,
and I already have a wrapper containing some info about the modules, but I
would have to maintain all that info myself and keep it up to date... when
anything changes in the API, or when different browsers differ on certain
properties, that becomes unworkable.
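To illustrate, this is roughly the kind of hand-maintained metadata my
wrapper has to carry for every node type; none of the names below come from
the spec, it's all my own bookkeeping:

    // Hand-maintained description of one node type for the editor.
    // None of this can currently be queried from the API itself.
    var gainNodeInfo = {
        numberOfInputs: 1,
        numberOfOutputs: 1,
        params: [
            { name: "gain", rate: "a-rate", defaultValue: 1 }
        ]
    };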



>> - Is it possible to check how many channels a certain output has? It seems
>> that a signal can have any amount of channels, is there a way to find how
>> many a certain output generates?
>>
>
> Connect an AudioSplitterNode and see how many outputs it gets assigned.
>  Do you have a scenario where this is interesting?
>

In my case, feedback for the user (what I described above applies here as
well). If there's a module the user can patch in, then exactly what its
output sends out is important information. I'm aware that I'm primarily
focused on my own use case (building a modular synth), but the number of
channels an output generates seems important for any programmer as well.
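For reference, a rough sketch of the probing trick Chris suggests, using
createChannelSplitter() from the current draft. I'm taking his word that
the splitter reflects the channel count of whatever is connected to it; I
can't confirm from the spec that numberOfOutputs updates dynamically:

    var ctx = new (window.AudioContext || window.webkitAudioContext)();
    var source = ctx.createBufferSource();   // stand-in for any node
    var splitter = ctx.createChannelSplitter();
    source.connect(splitter);
    // Per Chris's description, see how many outputs actually get
    // assigned; there is no channelCount property to read directly.
    console.log(splitter.numberOfOutputs);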


>
>
>> - Can the number of inputs/outputs on a ChannelMixer/ChannelMerger be
>> changed on the fly?
>>
>
> No
>

Ok. Not a big issue.


>> - Is this the final design for how disconnecting will work? It seems a bit
>> impractical to only be able to disconnect a whole output. When it has
>> multiple connections, they are all gone (I assume that's how it works).
>>
>
> There's another thread about this; I'll respond on that thread.
>

Ok.


>
>
>> - Will it be possible in the future to retrieve the full graph from the
>> AudioContext? It looks like it doesn't give any info on it.
>>
>
> Keeping in mind that parts of the graph may be conveniently
> garbage-collected, I'm not sure it would be a good idea to do this naively.
>  Did you have a particular scenario in mind?
>

It would help a lot with debugging, but if the disconnect options change
for the better, I can live without it. Still, it would be handy (and
logical?) to have such an option. As for garbage collection: if you could
remove a module from the context with a direct method (which seems like a
good addition anyway), it would be gone immediately. But this is no big
thing either.
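In the meantime, one workaround I'm considering for my editor is wrapping
connect() so it keeps its own edge list. A sketch only; I haven't verified
that AudioNode.prototype can be patched like this in every browser:

    // Record every connection as it is made, so the editor can
    // reconstruct the graph without help from the AudioContext.
    var edges = [];
    var origConnect = AudioNode.prototype.connect;
    AudioNode.prototype.connect = function (dest, output, input) {
        edges.push({ from: this, to: dest });
        return origConnect.apply(this, arguments);
    };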


>
>
>> - Doesn't the oscillator have the option to play from a buffer? It seems
>> to have a wavetable, but I have no idea what that is (does it contain IFFT data?).
>>
>
> No.  Oscillator is specifically for generating periodic waveforms.
>
> Note there are two usages of the term "Wavetable".  One of them - the one
> referenced by the current Web Audio specification - is wavetable synthesis (
> http://en.wikipedia.org/wiki/Wavetable_synthesis) - an additive synthesis
> technique that lets you specify coefficients on a (potentially large)
> number of harmonic overtones to a fundamental frequency and waveform.  Said
> a little differently:  you specify how much of each overtone in the Fourier
> series gets mixed together to create the single-cycle waveform.
>
> Unfortunately, this term got confused with a single-cycle-table-lookup
> sample playback in the early nineties.  Although wavetable synthesis will
> frequently pre-compute the waveform from the harmonic series coefficients
> and play back a looped single cycle sample, that's not the point.
>
> If you want to have an oscillator that plays from a buffer, a la a
> single-cycle sample playback engine, it's quite easy - just create a
> buffer, put a single cycle in it, and use an AudioBufferSourceNode to loop
> that cycle.
>

Ah, thanks for clearing that up.
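For anyone else reading along, the single-cycle approach Chris describes
comes down to something like this (a sketch; cycle length chosen
arbitrarily, and noteOn() is being renamed start() in newer drafts):

    var ctx = new (window.AudioContext || window.webkitAudioContext)();
    var length = 1024;   // samples in one cycle
    var buffer = ctx.createBuffer(1, length, ctx.sampleRate);
    var data = buffer.getChannelData(0);
    for (var i = 0; i < length; i++) {
        data[i] = Math.sin(2 * Math.PI * i / length);   // one sine cycle
    }
    var src = ctx.createBufferSource();
    src.buffer = buffer;
    src.loop = true;
    // At playbackRate 1 the loop repeats at sampleRate / length Hz;
    // scale it to reach the pitch you want, e.g. 440 Hz.
    src.playbackRate.value = 440 / (ctx.sampleRate / length);
    src.connect(ctx.destination);
    src.noteOn(0);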


>
>> - Is there a way to retrieve the params of a module?
>> - Is there a way to retrieve more info about an AudioParam? (name,
>> description)
>> - It looks like the JavaScriptAudioNode has an unchangeable number of
>> inputs and outputs. This freaks me out :) Is it going to stay this way?
>> - The ability to create ramps on an AudioParam seems nice, but I have the
>> feeling it's in the wrong place. Wouldn't that ability be better suited to
>> a separate module, triggerable by a GATE_ON signal (a concept I describe
>> below)? Give it a curve-type setting (LIN, EXP), AudioParams for the time,
>> and you'd be able to control nodes/params in a much more flexible way.
>> - Can't I add AudioParams to a JavaScriptAudioNode?
>>
>
> I'll leave these questions for someone else.
>

OK, although I noticed I had overlooked some things in the API. The first
three of those are covered; I read that they are all possible. That just
leaves the last two.
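As an aside, the ramp scheduling I mentioned is already usable for
envelopes; the catch, which ties into the gate idea below, is that it can
only be kicked off from script, never by a signal arriving at the node.
A sketch, assuming an AudioContext ctx (and createGainNode() is becoming
createGain() in newer drafts):

    // A simple attack/decay envelope scheduled on a GainNode's gain param.
    var gain = ctx.createGainNode();
    var now = ctx.currentTime;
    gain.gain.setValueAtTime(0, now);
    gain.gain.linearRampToValueAtTime(1, now + 0.01);         // 10 ms attack
    gain.gain.exponentialRampToValueAtTime(0.001, now + 0.5); // decay (can't target 0)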


>
>> - Can't I pause/resume the whole system?
>>
>
> No.  Also see other thread in a couple minutes, when I respond there.  :)
>
>
Ok.



>> And a final thing: would it be an idea to replace the calling of certain
>> methods on certain audio objects with a few logic-based signal
>> interpretations? For instance, let's say a signal that:
>> - crosses through 0 from negative to positive is called GATE_ON
>> - a signal that goes through 0 from pos to neg is called GATE_OFF
>> - a signal >0 is GATE_HIGH
>> - a signal <=0 is GATE_LOW
>>
>> This way, you can have audio parameters that respond to those
>> states/events. For example:
>> - the AudioBufferSourceNode can have a 'control' parameter that starts
>> the signal whenever it gets a GATE_ON, instead of a noteOn() command
>> - the oscillator can play while it gets a GATE_HIGH, instead of (again)
>> needing the noteOn() command
>> - you can start envelopes on GATE_HIGH events
>>
>> This gives you *a lot* more flexibility and fun in triggering certain
>> actions, and allows you to create nice sequencers. I really don't see
>> how to fit the calling of methods to start something into a
>> continuous-signal-based graph.
>>
>
> Can you detail for me why you'd be more interested in driving such gates
> from an a-rate or k-rate style parameter?  I can understand what you're
> asking for and how it would work, I'm just trying to think of when I'd want
> to do it that way.
>
>
To me this is a *big* issue, but again this is seen from my modular synth
perspective (yes, I love them: https://dl.dropbox.com/u/250155/rene/foto.JPG).
Without it, I don't see a way to implement modular sequencers. If you look
at the startup patch of my Flash synth (www.patchwork-synth.com),
sequencers can do a lot of different cool stuff, from triggering samplers
to incrementing steps on other sequencers to generating random notes; it
all depends on what's connected to them. With the current implementation, a
sequencer (as a module) that sends out a trigger would have to traverse
down the graph to see what kind of stuff is connected to it, look up what
kind of commands/methods each node accepts, decide what command to use, and
schedule it somewhere in the future (depending on where in the buffer it
is). I don't think that's even possible, but apart from that, it's not
something a sequencer should do: it should just send out trigger signals,
without knowing or caring what happens outside of its own scope. With some
uniform 'commands' (GATE_ON, GATE_OFF, GATE_HIGH, GATE_LOW) you have
universal 'methods' for triggering different events.

A separate kind of AudioParam that can react to the four states I mentioned
would add a great deal of flexibility, and would allow very complicated
kinds of algorithmic composition and many more musical applications. (By
the way, I used the word 'instead' above, but that wouldn't be practical;
such an AudioParam in addition to controlling a node by methods would be
best.)
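To make the idea concrete: the closest approximation I can see today is
polling a JavaScriptAudioNode for zero crossings, which is exactly the
kind of clumsiness a gate-aware AudioParam would remove. A sketch,
assuming an AudioContext ctx, with the GATE_ON reaction reduced to a
hypothetical onGateOn() callback:

    // Crude gate detector: watch an input signal for upward zero
    // crossings and fire a callback for each GATE_ON.
    var detector = ctx.createJavaScriptNode(1024, 1, 1);
    var previous = 0;
    detector.onaudioprocess = function (e) {
        var input = e.inputBuffer.getChannelData(0);
        for (var i = 0; i < input.length; i++) {
            if (previous <= 0 && input[i] > 0) {
                onGateOn();   // hypothetical handler; might call noteOn() etc.
            }
            previous = input[i];
        }
    };
    // A JavaScriptAudioNode only runs while connected into the graph.
    detector.connect(ctx.destination);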


Peter

Received on Monday, 16 July 2012 17:37:30 UTC