Re: Aiding early implementations of the web audio API

On Tue, May 22, 2012 at 12:27 PM, Colin Clark <colinbdclark@gmail.com> wrote:

> Hi all,
>
> This is a great discussion. A few comments inline:
>
> On Tue, May 22, 2012 at 8:55 PM, Chris Wilson <cwilso@google.com> wrote:
> > The easiest interface would just be to have an output device stream.
> > However, I think having a basic audio toolbox in the form of node types
> > will cause an explosion of audio applications - building the vocoder
> > example was illustrative to me, because I ended up using about half of
> > the node types, and found them to be fantastically easy to build on.
> > Frankly, if they hadn't been there, I wouldn't have built the vocoder,
> > because it would have been too complex for me to take on.  After working
> > through a number of other scenarios in my mind, I'm left with the same
> > feeling - having this set of node types fulfills most of the needs that
> > I can envision, and the few I've thought of that aren't covered, I'm
> > happy to use JS nodes for.  The only place where I'm not entirely
> > convinced is that I think I would personally trade the
> > DynamicsCompressorNode for an envelope follower node.  Maybe that's just
> > because I'd rather hack noise gates, auto-wah effects, etc., without
> > dropping into a JS node.
>
> Chris, I think it's great that you've had such a good experience creating
> cool demos with the building blocks provided by the Web Audio API. There
> are some really great features built right in, and I agree that they're
> quite powerful. I'm looking forward to seeing your vocoder demo!
>
> That said, I think you'll find that as you continue to go deeper into
> synthesis and audio processing, you won't be able to avoid the need for
> new processing units that don't ship with the Web Audio API. For example,
> if you want to create a realistic-sounding model of an analog
> synthesizer, you'll need band-limited oscillators along the lines of:
>
> http://www-ccrma.stanford.edu/%7Estilti/papers/blit.pdf


Maybe you're not aware that we already have high-quality band-limited
oscillators implemented in WebKit right now.
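
For anyone who wants to experiment with them, here's a minimal sketch
(assuming a current WebKit build and the webkit-prefixed names of the
moment; these may still change as the spec evolves):

    // Play a band-limited sawtooth with the built-in Oscillator.
    var context = new webkitAudioContext();
    var osc = context.createOscillator();
    osc.type = osc.SAWTOOTH;           // band-limited sawtooth
    osc.frequency.value = 220;         // Hz
    osc.connect(context.destination);
    osc.noteOn(0);                     // start immediately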



>
>
> ... as well as other novel types of filters and processing units not
> included in the current spec. To get a sense of the kinds of building
> blocks provided by a sophisticated synthesis toolkit, have a look at what
> ships with popular development environments like SuperCollider and
> Max/MSP:
>
> http://doc.sccode.org/Guides/Tour_of_UGens.html
> http://cycling74.com/docs/max5/vignettes/core/msp_alphabetical.html


I respect these systems very much and even worked with James McCartney
(the inventor of SuperCollider) at Apple.  I've studied these (as well as
Csound and Pure Data) and others.  There is a set of fundamental synthesis
and processing building blocks common to all of these systems, and it had
a big influence on the design of the Web Audio API.
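
To make the parallel concrete, the oscillator -> filter -> amplitude
envelope patch that every one of those environments teaches first maps
directly onto today's node types.  A sketch, again using the current
webkit-prefixed names:

    // Basic subtractive-synthesis voice: the same building blocks
    // found in SuperCollider, Max/MSP, Csound, and Pure Data.
    var context = new webkitAudioContext();
    var osc = context.createOscillator();
    var filter = context.createBiquadFilter();
    var amp = context.createGainNode();

    osc.type = osc.SQUARE;
    osc.frequency.value = 110;
    filter.type = filter.LOWPASS;
    filter.frequency.value = 800;      // cutoff in Hz
    filter.Q.value = 10;

    osc.connect(filter);
    filter.connect(amp);
    amp.connect(context.destination);

    // Simple attack/release envelope on the gain AudioParam.
    var now = context.currentTime;
    amp.gain.setValueAtTime(0, now);
    amp.gain.linearRampToValueAtTime(1, now + 0.02);  // 20 ms attack
    amp.gain.linearRampToValueAtTime(0, now + 1.0);   // 1 s release
    osc.noteOn(now);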


>
>
> We can't possibly shoehorn these all into a spec that would be manageable
> for all browser vendors to implement, so it's clear that if we want to
> enable innovative new sounds on the Web, JavaScript nodes are going to be a
> critical part of it.


I agree; I never suggested that we implement *all* of the esoteric
processing algorithms of SuperCollider, for example.  JavaScriptAudioNode
is important...
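
...and it already covers cases like the envelope follower I mentioned
earlier.  A rough sketch of one as a JavaScriptAudioNode (the buffer size
and smoothing coefficient here are arbitrary, and "context" is an existing
AudioContext):

    // Envelope follower: rectify the input and smooth it with a
    // one-pole lowpass, writing the envelope to the output.
    var follower = context.createJavaScriptNode(1024, 1, 1);
    var env = 0;
    follower.onaudioprocess = function (e) {
      var input = e.inputBuffer.getChannelData(0);
      var output = e.outputBuffer.getChannelData(0);
      for (var i = 0; i < input.length; i++) {
        env += 0.005 * (Math.abs(input[i]) - env);  // smooth the rectified signal
        output[i] = env;
      }
    };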


> The more the spec can expose the fast underlying primitives of the Web
> Audio API to the JavaScript author (FFTs, the convolution engine, etc.),
> as well as supporting worker-based synthesis, the better the experience
> will be for everyone.
>

I think we *are* exposing the most important primitives in the Web Audio
API already.  And we've discussed that worker-based processing is a good
thing, which should be part of the spec.
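
For instance, the convolution engine and FFT analysis are both available
as nodes today.  A sketch, where "source" is any AudioNode and
"impulseResponse" is a placeholder for an AudioBuffer you'd load yourself:

    // Convolution engine, exposed as a node.
    var convolver = context.createConvolver();
    convolver.buffer = impulseResponse;  // placeholder AudioBuffer
    source.connect(convolver);
    convolver.connect(context.destination);

    // FFT magnitudes via the analyser node.
    var analyser = context.createAnalyser();
    analyser.fftSize = 2048;
    convolver.connect(analyser);
    var spectrum = new Float32Array(analyser.frequencyBinCount);
    analyser.getFloatFrequencyData(spectrum);  // dB magnitude per bin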

Chris

Received on Tuesday, 22 May 2012 21:01:05 UTC