Re: Starting

On Mon, Apr 15, 2013 at 7:32 PM, Joseph Berkovitz <> wrote:

> Hi Stuart,
> It isn't a silly question at all since it points at the need for clearer
> documentation in the future.
> The reason that ScriptProcessorNodes do not have start and stop times is
> that they act as processors of the raw sample blocks which serve as the
> basis for the AudioContext's operation. These blocks of necessity have a
> regular size and always begin/end at sample block boundaries. Adding
> scheduling to these nodes would make their operation more complicated to
> define and would mess with their "straight to the metal" simplicity.

I think that the (sort of hackish) way to do this would be to disconnect()
the script processor node from its output, and connect() it again when you
want to resume playback.  To make this work per spec, we would of course
need to specify that audioprocess events are only dispatched while the
script processor is connected to an output, but FWIW you should be able to
get this working in Gecko today by doing what I described.
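A minimal sketch of that trick, written against any object with connect()/disconnect() so the bookkeeping can be exercised outside a browser; in a real page `node` would be the ScriptProcessorNode and `destination` would be ctx.destination (the helper name is illustrative, not part of the API):

```javascript
// Pause/resume a node by disconnecting it from its output, as described above.
function makePausable(node, destination) {
  let connected = false;
  return {
    resume() {
      if (connected) return;
      node.connect(destination);   // audioprocess events resume firing
      connected = true;
    },
    pause() {
      if (!connected) return;
      node.disconnect();           // in Gecko, events stop while disconnected
      connected = false;
    },
    get playing() { return connected; },
  };
}
```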

> The scheduling of a GainNode's gain in front of your noise generator will
> work, but you could also do this within the ScriptProcessorNode although
> it's slightly more complicated. You can examine the playbackTime of each
> AudioProcessEvent, determine whether a start/stop occurs during the current
> sample block, and begin/cease synthesis of a nonzero signal at the
> appropriate number of frames within the block.

I think this is a great idea, but I'm a bit worried about how precise it
will be, given that playbackTime may not end up being entirely accurate if
the processing code takes longer than expected.  Unfortunately WebKit and
Blink do not yet implement the playbackTime attribute of
AudioProcessingEvent, but we do in Gecko, so if you ever try this I would
be very curious to hear what your experience is in practice.
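For reference, the per-block bookkeeping Joseph describes might look like this. All times are in seconds; startTime/stopTime are illustrative names for your own scheduled times, not part of the API:

```javascript
// Given the block's playbackTime, work out which frames (if any) fall
// between a scheduled startTime and stopTime.
function activeFrames(playbackTime, sampleRate, blockLength, startTime, stopTime) {
  const first = Math.max(0, Math.ceil((startTime - playbackTime) * sampleRate));
  const last = Math.min(blockLength, Math.ceil((stopTime - playbackTime) * sampleRate));
  return first < last ? [first, last] : null;  // null => emit silence
}

// Inside onaudioprocess one would then do (illustrative):
// const range = activeFrames(e.playbackTime, ctx.sampleRate, out.length, t0, t1);
// if (range) for (let i = range[0]; i < range[1]; i++) out[i] = Math.random() * 2 - 1;
```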

> Perhaps an even simpler (and less performance intensive) way is to
> algorithmically pre-generate one or more reasonably long AudioBuffers of
> white noise and surface these as audio via one or more
> AudioBufferSourceNodes, which can be scheduled with start() and stop(). If
> you needed continuous white noise this wouldn't be so good, but since
> you're emulating drum sounds it may be a reasonable approach.
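A quick sketch of that approach. The noise fill is a pure function and testable; the Web Audio wiring is shown in comments since it needs a browser AudioContext, and the specific durations are illustrative:

```javascript
// Fill a channel's samples with white noise.
function fillWhiteNoise(channel) {
  for (let i = 0; i < channel.length; i++) {
    channel[i] = Math.random() * 2 - 1;  // uniform white noise in [-1, 1)
  }
  return channel;
}

// In a page (illustrative):
// const buffer = ctx.createBuffer(1, 2 * ctx.sampleRate, ctx.sampleRate);
// fillWhiteNoise(buffer.getChannelData(0));
// const src = ctx.createBufferSource();
// src.buffer = buffer;
// src.connect(ctx.destination);
// src.start(ctx.currentTime + 0.1);
// src.stop(ctx.currentTime + 0.35);  // e.g. a 250 ms noise burst
```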

AudioBufferSourceNodes cannot be re-started once they're stopped, so you
would need to create a new AudioBufferSourceNode (and redo its initial
setup) for every hit.  So I think that in practice, disconnecting and
reconnecting the script processor might be a more performant way of
achieving the same result.
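For completeness, the one-node-per-hit pattern is straightforward; createSource is injected here so the sketch can be tested without a browser, and in a real page it would be () => ctx.createBufferSource() (the helper name is illustrative):

```javascript
// Since AudioBufferSourceNodes are one-shot, trigger a fresh one per hit.
function makeDrumTrigger(createSource, buffer, destination) {
  return function trigger(when) {
    const src = createSource();
    src.buffer = buffer;        // the per-node setup must be redone each time
    src.connect(destination);
    src.start(when);
    return src;
  };
}
```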


Received on Tuesday, 23 April 2013 03:54:39 UTC