Re: AudioNode API Review - Part 1 (StandingWave3 Comparison)

Hi all, this is Ian Ni-Lewis from Google. I've been working with Chris a
little bit. I also did a fair amount of work on audio for the Xbox 360.

On Mon, Oct 4, 2010 at 2:47 PM, Joseph Berkovitz <joe@noteflight.com> wrote:

> Hi folks,
> Loop Points: SW3 allows a single loop point to be specified for an audio
> buffer.  This means that the loop "goes back" to a particular nonzero sample
> index after the end is reached.  This feature is really essential for
> wavetable synthesis, since one is commonly seeking to produce a simulated
> note of indefinite length by looping a rather featureless portion of an
> actual note being played, a portion that must follow the initial attack.
>

I agree, this is a very important feature.
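
Something along these lines is what I picture; just a sketch, and I'm making
up the loop/loopStart attributes to show the shape of it, they're not quotes
from the current draft:

    // Sketch only: sustain a note indefinitely by looping a featureless
    // stretch that follows the attack. loop and loopStart are assumed
    // attribute names, not something in the current draft.
    var source = context.createBufferSource();
    source.buffer = noteBuffer;        // attack plus sustain material
    source.loop = true;
    source.loopStart = 0.25;           // seconds: jump back to just past the attack
    source.connect(context.destination);
    source.noteOn(0);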


>
> Resampling / Pitch Shifting: SW3 uses an explicit filter node
> (ResamplingFilter) which resamples its input at an arbitrary sampling rate.
> This allows any audio source to be speeded up/slowed down (making its
> overall duration shorter/longer).  Contrast this with the Web API, in which
> AudioBufferSourceNode "bakes in" resampling, via the playbackRate attribute.
>  It appears that in the Web API no composite source or subgraph can be
> resampled.  Now, the Web API approach would actually be sufficient for
> Noteflight's needs (since we only apply resampling directly to audio
> buffers) but it's worth asking whether breaking this function out as a
> filter is useful.
>

If you allowed filters to resample, wouldn't you also have to allow input
and output buffers of variable size? One of the virtues of the current
design seems to be that buffer sizes can be kept constant throughout the
graph.
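
To make that concrete, here's roughly what a pull-based resampling filter
would have to do (not proposed API, just pseudocode in JS; upstream.pull()
and linearResample() are made up):

    // A resampler that fills a fixed-size output block has to request a
    // variable number of frames from upstream, which breaks the
    // constant-buffer-size assumption.
    function renderResampled(outputBlock, rate) {
      var framesNeeded = Math.ceil(outputBlock.length * rate);
      var input = upstream.pull(framesNeeded);    // variable-size request
      linearResample(input, outputBlock, rate);   // writes outputBlock.length frames
    }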


> Looping-as-Effect: SW3 also breaks out looping as an explicit filter node,
> allowing any composite source to be looped.
>

Again, this seems to require more complex input/output logic between the
nodes. I get the feeling that your downstream filters get to pull inputs on
demand, rather than having their inputs handed to them by the upstream
filters. True?
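
In other words, something like this (just my guess at the model, sketched in
JS; the upstream object and its methods are made up):

    // Pull model: a loop "filter" satisfies a fixed-size request by
    // repeatedly pulling from a finite upstream source and seeking back
    // to the loop point whenever the source runs out.
    function pullLooped(frames) {
      var out = new Float32Array(frames);
      var written = 0;
      while (written < frames) {
        var chunk = upstream.pull(frames - written);  // may return fewer frames
        out.set(chunk, written);
        written += chunk.length;
        if (upstream.atEnd()) upstream.seek(loopPoint);
      }
      return out;
    }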


> SEQUENCING
>
> SW3's approach to time-sequencing of audio playback is very different from
> the Web API's noteOn(when) approach.  I feel that each approach has distinct
> strengths and weaknesses.  This is probably the biggest architectural
> difference between the projects.
>
>
I agree that we shouldn't preclude more complex sequencing. But does that
need to be part of the core API? Or is it something that can be built on top
of a simpler time-based API?


> The net result is that in the Web API approach, if you want to encapsulate
> knowledge of a subgraph's internals, you have to pass an onset time into the
> code that makes that subgraph.  This doesn't seem good to me because it
> conflates the construction and the scheduling of a complex sound.  I am
> still thinking about what to recommend instead (other than just adding a
> Performance-like construct to the Web API), but would first like to hear
> others' reaction to this point.
>

How difficult would it be to write a Performance-like construct in JS on top
of the existing proposal? If it's doable, then I'd vote (not that I have a
vote, I'm just a lurker here :-) ) for standardizing the core API and
letting third parties add better sequencing after the fact. I've seen other
standards get horribly bogged down by trying to be everything to everyone,
and I'd hate to see that happen here.
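
For instance, a first cut might look something like this (very rough, and
assuming nothing beyond the noteOn(when) scheduling already in the proposal;
the names are mine):

    // A Performance-like wrapper: each element supplies a function that
    // builds its subgraph, and the performance handles scheduling, so
    // construction and onset time stay separate.
    function Performance(context) {
      this.context = context;
      this.elements = [];                 // { time, build } pairs
    }
    Performance.prototype.add = function(time, build) {
      this.elements.push({ time: time, build: build });
    };
    Performance.prototype.start = function(when) {
      var ctx = this.context;
      this.elements.forEach(function(e) {
        var source = e.build(ctx);        // returns a source node (or anything with noteOn)
        source.noteOn(when + e.time);     // onset is applied here, not at build time
      });
    };

That would seem to address Joseph's point: the code that builds the subgraph
never needs to know the absolute onset time.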


-- 
Ian Ni-Lewis
Developer Advocate
Google Game Developer Relations

Received on Tuesday, 5 October 2010 07:24:23 UTC