Re: Sample-accurate JS output (was: scheduling subgraphs)

On Oct 19, 2010, at 4:11 PM, Chris Rogers wrote:
>
> My thinking has been that the JavaScriptAudioNodes would not be  
> created and destroyed at a fine time granularity.  There are  
> extremely complex issues with buffering and scheduling from the
> real-time audio thread to the main thread where JS and page-rendering
> occur.  The reliability of the timing of event listeners is not 100%  
> certain, and at the buffer sizes necessary to avoid audio glitching  
> there are latency factors which come into play.  So, although I  
> agree that we'll need a "playbackTime" attribute for sample-accurate  
> synchronization of JS processing versus the rest of the audio graph,  
> I'm a bit concerned about the idea of having dozens of  
> JavaScriptAudioNodes getting created and destroyed in a short amount  
> of time with the expectation that it will all happen with perfect  
> timing.  Instead, I would propose that JavaScriptAudioNodes are  
> created at the beginning and remain running as long as needed.  They  
> can do whatever synchronized rendering they want, including  
> generating silence if it's in between time to play any events.  I  
> know that you will probably consider this much less elegant than the  
> system that you're proposing.  But, practical implementation details  
> and reliability are really important here.  And this simpler  
> approach does not limit, in any respect, the types of applications  
> which could be created.

I wasn't thinking that JavaScriptAudioNodes would be created and
destroyed dynamically as you described -- I know that's a nightmare.
I was only proposing that we make it much easier for programmers to
keep track of when programmatic output starts and stops, as follows
(a rough code sketch follows the list):

1) Only dispatch AudioProcessingEvents when the node's scheduled
time range intersects the time range currently being rendered.
Nodes would exist all the time.  If no event is dispatched, the
node is considered to have generated a zero-filled buffer.

2) Provide an offset playback time in the AudioProcessingEvent which  
maps the node's scheduled start time to a value of zero.

3) Allow the node to synthesize a "partial batch" of samples when it
starts or ends in the middle of a sample batch.  This is easily
accomplished by having AudioProcessingEvent also pass the expected
number of samples in an attribute.  If this is less than the node's
bufferSize, the framework will zero-pad the partial batch on the left
(if starting in mid-batch) or on the right (if stopping in mid-batch).
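
To make this concrete, here's a rough sketch of what I have in mind.
It assumes today's createJavaScriptNode() factory; expectedSampleCount,
startTime and endTime are just placeholder names for the attributes in
points 1 and 3, and playbackTime here is the offset version from point
2.  None of this is meant as final API:

    // Framework side (pseudocode): dispatch an event only when the
    // node's scheduled range [startTime, endTime) overlaps the
    // quantum being rendered, [renderStart, renderEnd).
    function shouldDispatch(node, renderStart, renderEnd) {
      return node.startTime < renderEnd && node.endTime > renderStart;
    }

    // Node side: a 440 Hz tone whose phase is driven by the offset
    // playbackTime, so the first event the node sees starts at zero.
    var node = context.createJavaScriptNode(1024);
    node.onaudioprocess = function (event) {
      var output = event.outputBuffer.getChannelData(0);
      var dt = 1 / context.sampleRate;
      // expectedSampleCount <= bufferSize; on a partial first or
      // last batch the framework zero-pads the remainder per point 3.
      for (var i = 0; i < event.expectedSampleCount; i++) {
        output[i] = Math.sin(2 * Math.PI * 440 *
                             (event.playbackTime + i * dt));
      }
    };
    node.connect(context.destination);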

This makes it incredibly simple to write scheduled programmatic
output, and my hope is that it imposes nothing on the framework
beyond some minimal bookkeeping and conditional handling.

Possibly this justifies a new node type, say JavaScriptPlaybackNode,
which extends the existing JavaScriptAudioNode class by adding a
noteOn()/startAt() method, since such a method does not make sense
for a filtering node.
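
Usage might then look something like this (createJavaScriptPlaybackNode()
and startAt() are of course hypothetical, and renderNotes is whatever
synthesis callback the application supplies):

    var player = context.createJavaScriptPlaybackNode(1024);
    player.onaudioprocess = renderNotes;  // fires only within the
                                          // scheduled time range
    player.connect(context.destination);
    // Start half a second from now, sample-accurately; inside
    // renderNotes, playbackTime would then begin at zero.
    player.startAt(context.currentTime + 0.5);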

... .  .    .       Joe

Joe Berkovitz
President
Noteflight LLC
160 Sidney St, Cambridge, MA 02139
phone: +1 978 314 6271
www.noteflight.com
