Re: Thoughts and questions on the API from a modular synth point of view

From: Chris Wilson <cwilso@google.com>
Date: Mon, 16 Jul 2012 09:45:11 -0700
Message-ID: <CAJK2wqVgbqfeqvjMXDUieLsoU7zPN8qo_Np9XrPr3RtRdeJtUA@mail.gmail.com>
To: Peter van der Noord <peterdunord@gmail.com>
Cc: "public-audio@w3.org" <public-audio@w3.org>
A couple of responses inline.  Chris Rogers will probably want to comment
on some of these, but he's on vacation right now.

On Sat, Jul 14, 2012 at 8:50 PM, Peter van der Noord
<peterdunord@gmail.com> wrote:
> I'm currently working on a graphical interface for the Web Audio API,
> allowing the user to easily create graphs (and easily save/load them). I've
> made a modular synthesizer in Flash (www.patchwork-synth.com) before this
> and it seemed like a challenging idea to do this in js.
> I have some questions and thoughts about the draft I read on w3.
> - Is it possible to check if an AudioParam runs at a-rate or k-rate?

I presume you mean "live", i.e. via a programmatic test.  No - but each
AudioParam's rate is detailed that way in the spec.

> - Is it possible to check how many channels a certain output has? It seems
> that a signal can have any amount of channels, is there a way to find how
> many a certain output generates?

Connect an AudioChannelSplitter and see how many outputs it gets assigned.
Do you have a scenario where this is interesting?

> - Can the number of inputs/outputs on a ChannelMixer/ChannelMerger be
> changed on the fly?


> - Is this the final design for how disconnecting will work? It seems a bit
> impractical to only be able to disconnect a whole output. When it has
> multiple connections, they are all gone (I assume that's how it works).

There's another thread about this; I'll respond on that thread.

> - Will it be possible in the future to retrieve the full graph from the
> AudioContext? It looks like it doesn't give any info on it.

Keeping in mind that parts of the graph may be conveniently
garbage-collected, I'm not sure it would be a good idea to do this naively.
Did you have a particular scenario in mind?

> - Doesn't the oscillator have the option to play from a buffer? It seems
> to have a wavetable, but I have no idea what that is (contains IFFT data?).

No.  Oscillator is specifically for generating periodic waveforms - the
built-in sine, square, sawtooth, and triangle types, or a custom WaveTable.

Note there are two usages of the term "wavetable".  One of them - the one
referenced by the current Web Audio specification - is wavetable synthesis
(http://en.wikipedia.org/wiki/Wavetable_synthesis) - an additive synthesis
technique that lets you specify coefficients on a (potentially large)
number of harmonic overtones of a fundamental frequency.  Said a little
differently: you specify how much of each overtone in the Fourier series
gets mixed together to create the single-cycle waveform.
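
For concreteness, a minimal sketch of that harmonic-coefficient approach,
using the createWaveTable()/setWaveTable() names from the 2012-era draft
(later renamed createPeriodicWave()/setPeriodicWave()); the specific
coefficients are just an illustration:

    var ctx = new webkitAudioContext();  // prefixed in 2012-era Chrome

    // real = cosine terms, imag = sine terms; index 0 is DC and is ignored.
    // Fundamental plus two overtones at decreasing amplitude.
    var real = new Float32Array([0, 0, 0, 0]);
    var imag = new Float32Array([0, 1, 0.5, 0.25]);

    var osc = ctx.createOscillator();
    osc.setWaveTable(ctx.createWaveTable(real, imag));  // type becomes CUSTOM
    osc.frequency.value = 220;  // fundamental at which the table is played
    osc.connect(ctx.destination);
    osc.noteOn(0);  // noteOn() in the 2012 draft; start() today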

Unfortunately, this term got confused with single-cycle table-lookup
sample playback in the early nineties.  Although wavetable synthesis will
frequently pre-compute the waveform from the harmonic series coefficients
and play back a looped single-cycle sample, that's not the point.

If you want to have an oscillator that plays from a buffer, a la a
single-cycle sample playback engine, it's quite easy - just create a
buffer, put a single cycle in it, and use an AudioBufferSourceNode to loop
that cycle.
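
A minimal sketch of that (again with 2012-era method names; the 440 Hz
sawtooth is just an illustrative choice):

    var ctx = new webkitAudioContext();

    // One cycle of a sawtooth at ~440 Hz: sampleRate / 440 frames long.
    var frames = Math.floor(ctx.sampleRate / 440);
    var buffer = ctx.createBuffer(1, frames, ctx.sampleRate);
    var data = buffer.getChannelData(0);
    for (var i = 0; i < frames; i++) {
      data[i] = 2 * (i / frames) - 1;  // ramp from -1 to +1
    }

    var source = ctx.createBufferSource();
    source.buffer = buffer;
    source.loop = true;  // repeat the single cycle indefinitely
    source.connect(ctx.destination);
    source.noteOn(0);

Rounding the cycle length to a whole number of frames detunes the result
slightly; playbackRate can compensate for that.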

> - Is there a way to retrieve the params of a module?
> - is there a way to retrieve more info about an audioparam? (name,
> description)
> - It looks like the JavaScriptAudioNode has an unchangeable number of
> inputs and outputs. This freaks me out :) Is it going to stay this way?
> - The ability to create ramps on an audioParam seems nice, but i have the
> feeling it's in the wrong place. Wouldn't that ability be better suited in
> a separate module, and triggerable by a GATE_ON signal (a concept I
> describe below). Give it a curve type setting (LIN, EXP), audioparams for
> the time, and you'd be able to control nodes/params in a much more flexible
> way.
> - Can't I add AudioParams on a JavaScript audio node?

I'll leave these questions for someone else.
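
For reference on the ramp question: the automation the current draft
already provides lives on AudioParam itself.  A minimal sketch of a
LIN-attack/EXP-decay envelope, using 2012-era method names
(createGainNode() became createGain() in later drafts):

    var gain = ctx.createGainNode();  // given an AudioContext 'ctx'
    var now = ctx.currentTime;
    gain.gain.setValueAtTime(0, now);                         // start silent
    gain.gain.linearRampToValueAtTime(1, now + 0.1);          // LIN attack
    gain.gain.exponentialRampToValueAtTime(0.01, now + 1.0);  // EXP decay
    // exponentialRampToValueAtTime() cannot target 0 exactly, hence 0.01.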

> - Can't I pause/resume the whole system?

No.  Also see the other thread in a couple of minutes, when I respond
there.  :)

> And a final thing: would it be an idea to replace the calling of certain
> methods on certain audioobjects with a few logic-based signal
> interpretations? For instance, let's say a signal that:
> - crosses through 0 from negative to positive is called GATE_ON
> - a signal that goes through 0 from pos to neg is called GATE_OFF
> - a signal >0 is GATE_HIGH
> - a signal <=0 is GATE_LOW
> This way, you can have audioparameters that respond to those
> states/events. For example:
> - the AudioBufferSourceNode can have a 'control' parameter that starts the
> signal whenever it gets a GATE_ON, instead of a noteOn() command.
> - the oscillator can play if it gets a GATE_HIGH, instead of (again) the
> noteOn() command.
> - you can start envelopes on GATE_HIGH events
> This gives you *a lot* more flexibility and fun towards triggering certain
> actions, and allows you to create nice sequencers. I really don't see how
> to implement the calling of methods to start something in a
> continuous-signal-based graph.

Can you detail for me why you'd be more interested in driving such gates
from an a-rate or k-rate style parameter?  I can understand what you're
asking for and how it would work; I'm just trying to think of when I'd want
to do it that way.
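
For experimenting with the gate idea today, the closest approximation is
probably a JavaScriptAudioNode watching its input for zero crossings.  A
rough sketch (2012-era names; the 256-frame buffer size is arbitrary):

    var watcher = ctx.createJavaScriptNode(256, 1, 1);  // given 'ctx'
    var previous = 0;

    watcher.onaudioprocess = function (e) {
      var input = e.inputBuffer.getChannelData(0);
      for (var i = 0; i < input.length; i++) {
        if (previous <= 0 && input[i] > 0) {
          // GATE_ON: negative-to-positive crossing.
          // Trigger here, e.g. someSource.noteOn(0).
        }
        previous = input[i];
      }
    };
    watcher.connect(ctx.destination);  // must be connected to process

The buffer granularity adds latency and jitter that a native,
sample-accurate gate input would avoid - which is presumably part of what
Peter is after.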

Received on Monday, 16 July 2012 16:45:46 UTC
