Re: Starting

On Tue, Apr 23, 2013 at 10:28 AM, Joseph Berkovitz <joe@noteflight.com> wrote:

>
> On Apr 22, 2013, at 11:53 PM, Ehsan Akhgari <ehsan.akhgari@gmail.com>
> wrote:
>
> On Mon, Apr 15, 2013 at 7:32 PM, Joseph Berkovitz <joe@noteflight.com> wrote:
>
>> The reason that ScriptProcessorNodes do not have start and stop times is
>> that they act as processors of the raw sample blocks which serve as the
>> basis for the AudioContext's operation. These blocks of necessity have a
>> regular size and always begin/end at sample block boundaries. Adding
>> scheduling to these nodes would make their operation more complicated to
>> define and would mess with their "straight to the metal" simplicity.
>>
>
> I think that the (sort of hackish) way to do this would be to disconnect()
> the script processor node from its output, and re-connect() it when you
> want to resume the playback.  We should of course spec that sending
> audioprocess events must only happen when the script processor is connected
> to an output, in order to make this work according to the spec, but FWIW
> you should get this to work in Gecko by doing what I described.
>
>
> Ehsan, I don't think this approach works for two reasons: 1) there is an
> unspecified time difference between the context's currentTime and the
> playbackTime of the next block to be synthesized, since processing can
> occur an arbitrary time interval prior to signal output.  2) Aside from the
> time difference, this would restrict the start and stop times of the node's
> output to block processing boundaries.
>

Hmm, ok, that's fair.
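
For the record, what I was picturing was something along these lines (just a
sketch; the buffer size and the startNoise/stopNoise names are placeholders):

var ctx = new AudioContext();
var processor = ctx.createScriptProcessor(4096, 1, 1);
processor.onaudioprocess = function (event) {
  var output = event.outputBuffer.getChannelData(0);
  for (var i = 0; i < output.length; i++) {
    output[i] = Math.random() * 2 - 1;  // white noise
  }
};

function startNoise() {
  processor.connect(ctx.destination);  // "resume" playback
}

function stopNoise() {
  processor.disconnect();              // silence the node
}

But as you say, that only gets you block-boundary granularity, plus the
unspecified offset between currentTime and the next block's playbackTime.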

>
>
>> The scheduling of a GainNode's gain in front of your noise generator will
>> work, but you could also do this within the ScriptProcessorNode although
>> it's slightly more complicated. You can examine the playbackTime of each
>> AudioProcessEvent, determine whether a start/stop occurs during the current
>> sample block, and begin/cease synthesis of a nonzero signal at the
>> appropriate number of frames within the block.
>>
>
> I think this is a great idea, but I'm a bit worried about how precise it
> will be given that playbackTime may not end up being entirely accurate if
> the processing code takes longer than expected.  Unfortunately WebKit and
> Blink do not yet implement the playbackTime attribute of
> AudioProcessingEvent, but we do in Gecko, so if you ever tried this I would
> be very curious to know what your experience was in practice.
>
>
> The specification for playbackTime reads as follows:
>
> "The time when the audio will be played in the same time coordinate system
> as AudioContext.currentTime. playbackTime allows for very
> tight synchronization between processing directly in JavaScript with the
> other events in the context's rendering graph."
>
> I believe that this leaves no room for playbackTime to be inaccurate. The
> value of playbackTime in an AudioProcessEvent must exactly equal the time T
> at which a sound scheduled with node.start(T) would be played
> simultaneously with the first frame of the AudioProcessEvent's sample block.
>
> I have not experimented with playbackTime in Gecko yet, but I
> originally proposed the feature for inclusion in the spec and the above
> definition is how it needs to work if it's to be useful for synchronization.
>

You're right about the current text in the spec, but we should probably
change it since what you're asking for is pretty much impossible to
implement.  Imagine this scenario: let's say that the ScriptProcessorNode
wants to dispatch an event with a properly calculated playbackTime.  Let's
say that the event handler looks like this:

function handleEvent(event) {
  // assume that AudioContext.currentTime can change its value without
  // hitting the event loop
  while (event.playbackTime < event.target.context.currentTime);
}

Such an event handler would simply busy-wait until playbackTime has passed
and then return, which would make it impossible for the
ScriptProcessorNode to operate without latency.
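
That said, the in-handler gating you describe would look roughly like this
(a sketch, assuming a best-effort playbackTime; startTime and stopTime are
hypothetical variables scheduled elsewhere):

processor.onaudioprocess = function (event) {
  var output = event.outputBuffer.getChannelData(0);
  var sampleRate = event.target.context.sampleRate;
  for (var i = 0; i < output.length; i++) {
    // time of this frame in the AudioContext's time coordinate system
    var t = event.playbackTime + i / sampleRate;
    output[i] = (t >= startTime && t < stopTime)
        ? Math.random() * 2 - 1  // synthesize noise while "on"
        : 0;                     // otherwise output silence
  }
};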


> AudioBufferSourceNodes cannot be re-started once they're stopped, so you
> would need to create new AudioBufferSourceNodes, and do the initial setup
> on them, so I think that in practice, disconnecting and reconnecting the
> script processor might be a more performant way of achieving the same
> result.
>
>
> I don't think this is necessarily true, since an AudioBufferSourceNode is
> presumably a lightweight wrapper around an AudioBuffer. We have not found
> any performance difficulties with setup/teardown of large numbers of such
> nodes in either WebKit or Gecko (in the 10-100 per second regime). And in
> fact, unless this level of usage of AudioBufferSourceNode is performant,
> musical synthesis may not be practical with a given API implementation.
>

With 10-100 nodes per second you should not experience any performance
problems...
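
For instance, a per-note pattern along these lines should be perfectly
workable (a sketch; noiseBuffer is assumed to be an AudioBuffer built once
up front, and ctx is the AudioContext):

function playNoteAt(when, duration) {
  var source = ctx.createBufferSource();  // lightweight wrapper around the buffer
  source.buffer = noiseBuffer;
  source.connect(ctx.destination);
  source.start(when);
  source.stop(when + duration);
}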

--
Ehsan
<http://ehsanakhgari.org/>

Received on Tuesday, 23 April 2013 19:12:06 UTC