Re: Starting

On Tue, May 7, 2013 at 10:01 PM, Srikumar Karaikudi Subramanian <
srikumarks@gmail.com> wrote:

>
> On 7 May, 2013, at 6:58 PM, Joseph Berkovitz <joe@noteflight.com> wrote:
>
>
> playbackTime isn't something that is "accurate" or "inaccurate".
>  playbackTime is completely deterministic since it describes a sample
> block's time relationship with other schedulable sources in the graph, not
> the actual time at which the sample block is heard. So it has nothing to do
> with buffering. The value of playbackTime in general must advance by
> (bufferSize/sampleRate) in each successive call, unless blocks of samples
> are being skipped outright by the implementation to play catch-up for some
> reason.
>
> Of course any schedulable source whatsoever has a risk of being delayed or
> omitted from the physical output stream due to unexpected event
> handling latencies. Thus, playbackTime (like the argument to
> AudioBufferSourceNode.start()) is a prediction but not a guarantee. The job
> of the implementation is to minimize this risk by various buffering
> strategies, but this does not require any ad-hoc adjustments to
> playbackTime.
>
>
> Many years ago when I was looking at audio-visual synchronization
> approaches for another system, one of the easiest to understand approaches
> I found was the "UST/MSC/SBC" approach described in the Khronos OpenML
> documents [1]. In essence, it says (according to my understanding) that
> every signal coming into the computing system is time stamped w.r.t. when
> it arrived on some "input jack", and every computed signal intended to
> leave the system is time stamped w.r.t. when it will leave the respective
> "output jack". This holds for both video and audio signals.
>
> Whether a signal actually leaves the system at the stamped time is up to
> the scheduler and the other system constraints, but from the perspective of
> the process computing the signal, it has done its job once the time stamp
> is set.
>
> UST/MSC/SBC may serve as an adequate framework for explaining the various
> time stamps in the system and the relationships between them, as well as
> provide an API to the various schedulers. We already have to deal with
> three now: graphics, audio samples and MIDI events.
>

How much of that is implementable in practice across the wide range of
devices and operating systems in existence today, and conceivable in the
future?
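
To make the playbackTime invariant quoted above concrete, here is a rough,
non-normative sketch in JavaScript. It assumes an onaudioprocess-style
callback whose event exposes playbackTime (as in the draft under discussion;
not every implementation surfaces it yet), and the logging is purely
illustrative:

var ctx = new AudioContext();
var node = ctx.createScriptProcessor(512, 1, 1);
var expected = null;

node.onaudioprocess = function (e) {
  var step = node.bufferSize / ctx.sampleRate;

  // Deterministic advance: each callback's playbackTime should equal the
  // previous one plus bufferSize/sampleRate, unless the implementation
  // skipped blocks outright to play catch-up.
  if (expected !== null &&
      Math.abs(e.playbackTime - expected) > 0.5 / ctx.sampleRate) {
    console.log("blocks skipped; playbackTime jumped by",
                e.playbackTime - expected);
  }
  expected = e.playbackTime + step;

  // Anything scheduled against e.playbackTime is aligned with this block
  // in the graph, e.g. a source meant to begin exactly two blocks later:
  //   someSource.start(e.playbackTime + 2 * step);
};
node.connect(ctx.destination);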
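
The UST/MSC/SBC framing quoted above likewise reduces to a small amount of
bookkeeping per "jack". The sketch below uses my own toy names (it is not
the OpenML API); it only illustrates correlating a stream counter with
system time and predicting when a given count will cross the jack:

// One (UST, MSC) correspondence per jack: system time versus the running
// count of frames that have crossed it, as reported by the device/driver.
function Jack(rate) {
  this.rate = rate;   // MSC units per second (e.g. the sample rate)
  this.ust = 0;       // system time (seconds) of the last observation
  this.msc = 0;       // stream count at that same instant
}

Jack.prototype.observe = function (ust, msc) {
  this.ust = ust;
  this.msc = msc;
};

// Predicted system time at which stream count `msc` reaches this jack.
Jack.prototype.ustForMsc = function (msc) {
  return this.ust + (msc - this.msc) / this.rate;
};

// Synchronization then reduces to comparing predictions across jacks,
// e.g. lining a video frame up with audio sample 480000:
//   videoJack.ustForMsc(frameNumber) versus audioJack.ustForMsc(480000)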

--
Ehsan
<http://ehsanakhgari.org/>

Received on Thursday, 9 May 2013 03:04:37 UTC