Re: Sample-accurate JS output (was: scheduling subgraphs)

Joe,

I understand that it could be implemented to work as you suggest without
much additional code, but my point is that there could still be heavy
traffic between the audio thread and the main thread when large numbers of
event listeners fire at nearly the same time (for overlapping notes).  The
handling of timers and event listeners on the main thread is fairly
unpredictable and competes with page rendering and other JavaScript running
there.  Garbage collection can also stall the main thread for significant
stretches of time.  I know that to some extent we're already accepting this
scenario by having a JavaScriptAudioNode in the first place.  But the API
you're proposing encourages a situation in which many more event listeners
need to be serviced in a short span of time.

That said, you're free to take the WebKit audio branch code and try some
experiments there.  My concern is mostly about the reliability and
robustness of the system when it's pushed in different ways, run on a
variety of platforms (slow and fast), and combined with other work going on
in the rendering engine, like WebGL and canvas drawing.

Chris

On Tue, Oct 19, 2010 at 1:33 PM, Joseph Berkovitz <joe@noteflight.com> wrote:
>
> I wasn't thinking that JavaScriptAudioNodes would be created and destroyed
> dynamically as you described -- I know that's a nightmare.  I was just
> thinking that we would make it much easier for programmers to keep track of
> when programmatic output starts and stops, as follows:
>
> 1) Only dispatch AudioProcessingEvents when there is an intersection
> between the node's scheduled time range and the time range being currently
> rendered.  Nodes would exist all the time.  If an event isn't dispatched,
> the node is considered to generate a zero-filled buffer.
>
> 2) Provide an offset playback time in the AudioProcessingEvent which maps
> the node's scheduled start time to a value of zero.
>
> 3) Allow the node to synthesize a "partial batch" of samples when it starts
> or ends in the middle of a sample batch.  This is easily accomplished by
> having AudioProcessingEvent also pass the expected number of samples in an
> attribute.  If this is less than the node's bufferSize, the framework will
> zero-pad the partial batch on the left (if starting mid-batch) or on the
> right (if stopping mid-batch).
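
In rough JavaScript, the behavior described in points 1-3 might look like
the sketch below.  This is purely illustrative: the function names
(shouldDispatch, offsetPlaybackTime, padPartialBatch) and the framing are
my own shorthand for the proposal, not part of any existing Web Audio API.

```javascript
// Point 1: dispatch an AudioProcessingEvent only when the node's
// scheduled time range intersects the range currently being rendered.
function shouldDispatch(schedStart, schedEnd, renderStart, renderEnd) {
  return schedStart < renderEnd && schedEnd > renderStart;
}

// Point 2: an offset playback time that maps the node's scheduled
// start time to zero.
function offsetPlaybackTime(playbackTime, schedStart) {
  return playbackTime - schedStart;
}

// Point 3: zero-pad a partial batch out to the node's bufferSize --
// on the left when starting mid-batch, on the right when stopping.
function padPartialBatch(samples, bufferSize, startsMidBatch) {
  const padded = new Float32Array(bufferSize); // zero-filled by default
  const offset = startsMidBatch ? bufferSize - samples.length : 0;
  padded.set(samples, offset);
  return padded;
}
```

The bookkeeping really is minimal: an interval intersection test, a
subtraction, and a copy into a zeroed buffer.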
>
> This makes it incredibly simple to write scheduled programmatic output,
> and my hope is that it imposes nothing on the framework beyond some
> minimal bookkeeping and conditional handling.
>
> Possibly this justifies something called JavaScriptPlaybackNode, which
> extends the existing JSAudioNode class by adding a noteOn()/startAt()
> method, since such a method does not make sense for a filtering node.
>
> ... .  .    .       Joe
>
> *Joe Berkovitz*
> President
> Noteflight LLC
> 160 Sidney St, Cambridge, MA 02139
> phone: +1 978 314 6271
> www.noteflight.com
>

Received on Tuesday, 19 October 2010 21:27:51 UTC