Re: Sample-accurate JS output (was: scheduling subgraphs)

Hi Joe, thanks for your questions.  You bring up some good points.

On Wed, Oct 20, 2010 at 12:45 AM, Joe Turner <joe@oampo.co.uk> wrote:

> Hi Chris,
> Just a last couple of points of clarification from me.  I think I'm
> getting there!  As the idea is that we minimise the number of
> JSAudioNodes (in theory only running one per app), what is the idea
> for having different graph routing on generated sounds?  For example
> if I want a saw wave at 100Hz running through a low-pass filter, and
> another saw wave at 400Hz running through a reverb, what would I do?
> Would it be a matter of having a JSAudioNode with multiple channels,
> then routing each channel differently?
>

Having a single JavaScriptAudioNode with multiple channels would be one
approach, but it clearly seems less elegant than having more than one
JavaScriptAudioNode.  Even now, the API allows for more than one,
albeit without some of the fancier scheduling smarts that Joe B has
suggested.  I've been concerned about the implementation complexity and
performance problems of handling this case, but having considered it
some more, I think there may be some interesting tricks to get around
the issues I was worried about.  If multiple simultaneous
JavaScriptAudioNodes can be made to work well (at least no worse than a
single JavaScriptAudioNode doing equivalent work), then I think it
would be very powerful, especially combined with Joe B's ideas.  But I
think we need to proceed cautiously here until it can be shown to work.
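
To make your two-saw example concrete, here's a rough sketch of the
multiple-node version.  It assumes the draft's createJavaScriptNode(),
createLowPass2Filter(), and createConvolver() factory methods, and a
"cutoff" parameter on the filter; any of these names may still change:

    var context = new AudioContext();

    // One JavaScriptAudioNode per voice, each routed independently.
    function createSaw(frequency) {
      var node = context.createJavaScriptNode(4096);  // assumed: 0 inputs, 1 output
      var phase = 0;
      node.onaudioprocess = function (event) {
        var samples = event.outputBuffer.getChannelData(0);
        for (var i = 0; i < samples.length; i++) {
          samples[i] = 2 * phase - 1;                 // naive sawtooth in [-1, 1)
          phase += frequency / context.sampleRate;
          if (phase >= 1) phase -= 1;
        }
      };
      return node;
    }

    var lowpass = context.createLowPass2Filter();
    lowpass.cutoff.value = 1000;              // assumed parameter name
    createSaw(100).connect(lowpass);
    lowpass.connect(context.destination);

    var reverb = context.createConvolver();
    reverb.buffer = impulseResponseBuffer;    // impulse response loaded elsewhere
    createSaw(400).connect(reverb);
    reverb.connect(context.destination);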


>
> On scheduling again, is the idea that we take the new "playbackTime"
> attribute of an AudioProcessingEvent and use that to "tick" a
> JavaScript scheduler?  I think this is how synchronisation between JS
> and native timing should work - is that correct?
>

Yes, that's exactly right.
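
In other words, something like the following, assuming playbackTime
lands on AudioProcessingEvent as proposed.  scheduleUpTo() and
fillBuffer() are placeholders for the application's own scheduling and
synthesis code:

    node.onaudioprocess = function (event) {
      // playbackTime is the context time at which the first sample of
      // this buffer will actually be heard, so the scheduler can run
      // exactly one buffer ahead of the hardware clock.
      var start = event.playbackTime;
      var end = start + event.outputBuffer.duration;
      scheduleUpTo(end);                      // fire events falling in [start, end)
      fillBuffer(event.outputBuffer, start);  // render this buffer's audio
    };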


> Also, if I was looking to synchronise visuals with
> AudioBufferSourceNodes scheduled using noteOn, would I then need to
> have a JSAudioNode to use for scheduling the visuals, and just send
> zeros for the sound data?  This sounds a little counterintuitive
> (although not difficult or harmful really).
>

I suppose that's one approach.  There's also been discussion of an
event listener that would be called at a specific context time.  A
third approach is "polling": the drawing code is already drawing every
frame anyway, so it simply queries the context's current time each
frame and draws accordingly.
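
As a minimal sketch of the polling approach (drawScene() stands in for
the application's own drawing code):

    function draw() {
      // The context clock advances on the audio thread, so sampling
      // currentTime each frame keeps the visuals locked to the audio.
      var now = context.currentTime;
      drawScene(now);   // draw whatever should be sounding at time `now`
    }
    setInterval(draw, 1000 / 60);  // ~60 fps; requestAnimationFrame
                                   // (vendor-prefixed today) also works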


>
> Finally just to echo Joe B's sentiment - I really appreciate the work
> you're putting in.  I'm mainly asking questions because I'm excited to
> see what will be possible.
> Cheers,
> Joe


Thanks, I really appreciate the support.

Chris
