Re: Starting

> 
> ScriptProcessorNode buffers its input and only dispatches the audioprocess event once a buffer of bufferSize samples has been filled, so in the best case each ScriptProcessorNode in the graph adds bufferSize/sampleRate seconds of delay.  When the implementation wants to dispatch the audioprocess event, it needs to calculate the playbackTime value.  Note that at this point the implementation doesn't know how long it's going to take for the event to be handled, so roughly speaking it calculates playbackTime to be currentTime + bufferSize/sampleRate.  In practice this is a guess on the part of the implementation that the event handling will finish very soon, with negligible delay.
> 
> Now, let's for the sake of this example say that the web page takes 100ms to handle the event.  Once the event dispatch is complete, we're now 100ms late to play back the outputBuffer, which means that the buffer will be played back at currentTime + bufferSize/sampleRate + 0.1 *at best*.  A good implementation can remember this delay and the next time calculate playbackTime to be currentTime + bufferSize/sampleRate + 0.1, accumulating all of the delays seen while dispatching the previous events and adjusting its estimate of playbackTime every time it fires an audioprocess event.  But unless the implementation can know how long the event handling phase will take, it can never calculate an accurate playbackTime, simply because it cannot foresee the future!
> 
> Actually, I'm quite sure it can exactly calculate this value, but I'd rather discuss it in the meeting, since I fear it might be too complicated to explain quickly right now.
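For reference, the delay-accumulating estimate described in the quoted message could be sketched as plain arithmetic like this (all names here are illustrative, not Web Audio API):

```javascript
// Illustrative sketch (hypothetical names, not spec API) of an implementation
// that folds observed handler delays into its playbackTime estimate.
const sampleRate = 44100;
const bufferSize = 4096;

let accumulatedDelay = 0; // seconds of handler delay observed so far

function estimatePlaybackTime(currentTime) {
  // Best-case estimate plus every delay seen in past dispatches; still a
  // guess, since the next handler's duration cannot be known in advance.
  return currentTime + bufferSize / sampleRate + accumulatedDelay;
}

function recordDispatchDelay(seconds) {
  accumulatedDelay += seconds;
}

// First dispatch: no delay observed yet.
const t0 = estimatePlaybackTime(0);
// The page takes 100ms to handle the event...
recordDispatchDelay(0.1);
// ...so every later estimate shifts by that amount.
const t1 = estimatePlaybackTime(bufferSize / sampleRate);
```

As the quoted message says, this can only ever track delays after the fact; it cannot anticipate the next one.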

I'm all for discussing it in the meeting (and not explaining it too hastily), but the gist of my thinking is this:

playbackTime isn't something that is "accurate" or "inaccurate".  playbackTime is completely deterministic: it describes a sample block's time relationship with other schedulable sources in the graph, not the actual time at which the sample block is heard.  So it has nothing to do with buffering.  In general, the value of playbackTime must advance by (bufferSize/sampleRate) on each successive call, unless the implementation is skipping blocks of samples outright to play catch-up for some reason.
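In other words, a minimal sketch of the deterministic view (the helper name is hypothetical, not spec API):

```javascript
// Deterministic view: playbackTime for block k is a fixed offset from the
// first block's time, independent of any handler latency.
const sampleRate = 44100;
const bufferSize = 1024;

function playbackTimeForBlock(firstBlockTime, k) {
  return firstBlockTime + k * (bufferSize / sampleRate);
}

// Successive values advance by exactly bufferSize/sampleRate.
const times = [0, 1, 2, 3].map(k => playbackTimeForBlock(0, k));
```

No measurement of past dispatch delays enters into this; the sequence is fixed by the graph's timeline alone.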

Of course, any schedulable source whatsoever runs the risk of being delayed or omitted from the physical output stream due to unexpected event-handling latencies.  Thus playbackTime (like the time argument to AudioBufferSourceNode.start()) is a prediction, not a guarantee.  The implementation's job is to minimize this risk through various buffering strategies, but that does not require any ad hoc adjustments to playbackTime.
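One such buffering strategy, sketched with assumed numbers (none of these names are spec API): dispatch each block with a fixed lookahead margin, so that a late handler still meets the block's nominal deadline and playbackTime never needs adjusting.

```javascript
// Hypothetical sketch: absorb handler jitter with a fixed lookahead margin.
// Block k still plays at its nominal time; the implementation just starts
// far enough ahead that late handlers still make the deadline.
const sampleRate = 48000;
const bufferSize = 512;
const lookahead = 0.05; // 50 ms of audio kept buffered ahead of the output clock

function deadlineForBlock(k) {
  // The hardware needs block k by its nominal playback time...
  return k * (bufferSize / sampleRate);
}

function dispatchTimeForBlock(k) {
  // ...so its audioprocess event is dispatched `lookahead` seconds earlier.
  return deadlineForBlock(k) - lookahead;
}

// A handler that takes 40 ms still meets its deadline under a 50 ms margin.
const handlerDuration = 0.04;
const meetsDeadline =
  dispatchTimeForBlock(10) + handlerDuration <= deadlineForBlock(10);
```

The margin bounds how much handler latency the output can absorb; the predicted playbackTime stays fixed either way.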

…Joe

Received on Tuesday, 7 May 2013 13:28:33 UTC