Re: AudioNode API Review - Part 1 (StandingWave3 Comparison)

Chris,

Thanks for the thoughtful response.  I am still working on Part 2 but  
want to see this part of the conversation through first!

>
> Joe, can you point me to some source code examples using your  
> StandingWave API?  The documentation looks excellent, but it's  
> always good to look at a variety of working code examples to get a  
> better feel for how the different parts work together.
>

Good point. Here's a simple example that should illustrate all the  
basic approaches in SW3:
http://joeberkovitz.com/projects/StandingWave/SimpleOutput/SimpleOutput.swf

Source is here (this is actually StandingWave2 code but it's pretty  
much the same):
http://code.google.com/p/standingwave/source/browse/trunk/example/src/SimpleOutput.mxml


> I can understand why you designed your API as you did, given that  
> the primary application is a software-based sample playback  
> synthesizer and sequencer.  Although I believe the Web Audio API  
> will be able to support such applications (with the addition of  
> envelopes, etc.), for most applications I believe that the  
> additional complexity of the Performance approach is not that  
> useful.  And for the cases where it is, it's not that hard to  
> implement your API approach with a fairly small JS wrapper library  
> without really affecting performance.

I don't want additional complexity either, and I would agree that a  
Performance is overkill in many situations.  It further seems likely  
that with the right envelope/filter primitives added to  
AudioBufferSourceNode, Noteflight's use case would come out OK.

However, I'm still not sure that a JS wrapper is going to work to  
implement a Performance if someone wants one, due to the problem of  
scheduling the innards of an arbitrarily complex chunk of nodes that  
have their own required relative timings.  I think that we will  
ultimately need some way to apply a global time-shift to an arbitrary  
group of nodes, similar in spirit to what an SVG group transformation  
does in spatial dimensions.  Even if this is not for now, do you feel  
that this 1) makes sense as a future goal, and 2) can be achieved  
without disruption to the current API?

As a consumer of the spec, I would also want a clearer idea of  
exactly how the noteOn/noteOff scheduling constructs are expected to  
utilize (or, more importantly, not utilize) machine resources.  Is  
there a way to communicate this in the specification?  This strikes  
me as a spec in which some sort of communication of best practices  
is crucial.

Once again, this is really great work and I'm super thrilled to be  
part of it.

regards,

... .  .    .       Joe

Joe Berkovitz
President
Noteflight LLC
160 Sidney St
Cambridge, MA 02139
phone: +1 978 314 6271
http://www.noteflight.com
http://joeberkovitz.com

Received on Tuesday, 5 October 2010 22:30:29 UTC