Re: Determining Output Latency

On Wed, Jan 16, 2013 at 9:51 AM, Joseph Berkovitz <joe@noteflight.com> wrote:

> Hi Chris,
>
> It's become apparent that on some devices and Web Audio implementations,
> an AudioContext's currentTime reports a time that is somewhat ahead of the
> time of the actual audio signal emerging from the device, by a fixed
> amount.  To be more specific, if a sound is scheduled (even very far in
> advance) to be played at time T, the sound will actually be played when
> AudioContext.currentTime = T + L where L is a fixed number which for the
> purposes of this email I'll call "output latency".
>
> I think the only reason this hasn't been noticed before is that until
> recently, the output latency on the implementations that I've been exposed
> to has been too small to notice. But in some implementations it can be
> substantial and noticeable.
>
> When this occurs, is this 1) a problem with the implementation of the
> spec, or 2) an anticipated phenomenon that may vary from one implementation
> to another?
>
> If the answer is 1), then at a minimum the spec needs to clarify the
> meaning of context.currentTime with respect to physical audio playback so
> that implementors realize they must add L back into the reported value of
> currentTime to make it correct.  But if the answer is 2), then we have a
> different problem: there doesn't appear to be any way to interrogate the
> API to determine the value of L on any particular platform.
>
> Can you or others on the list provide any guidance on this point? Should I
> file a bug and, if so, what for?
>
> Best,
>
> ... .  .    .       Joe
>

Hi Joe, the general idea is that the different underlying platforms/OSes can
have very different latency characteristics, so I think you're looking for
a way to query the system to find out what it is.  I think that something
like AudioContext.presentationLatency is what we're looking for.
Presentation latency is the time difference between when you tell an event
to happen and the actual time when you hear it.  So, for example, with
source.start(0) you would hope to hear the sound right now, but in reality
you will hear it with some (hopefully) small delay.  One example where this
could be useful is if you're trying to synchronize a visual "playhead" to
the actual audio being scheduled...
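
To make that concrete, here's a rough sketch of how a page might use such a
property to keep a visual playhead lined up with what's actually audible.
Note that presentationLatency is only the proposed attribute discussed above
(it doesn't exist in any implementation today), and myBuffer / drawPlayhead
are placeholders for the application's own decoded AudioBuffer and drawing
code:

    // Sketch only: presentationLatency is the proposed attribute, in
    // seconds; fall back to 0 where it isn't available.
    var context = new AudioContext();
    var source = context.createBufferSource();
    source.buffer = myBuffer;                  // hypothetical AudioBuffer
    source.connect(context.destination);

    var startTime = context.currentTime + 0.1; // schedule slightly ahead
    source.start(startTime);

    function animate() {
      // Subtract the output latency so the playhead tracks what is being
      // heard, not what has merely been handed to the audio hardware.
      var latency = context.presentationLatency || 0;
      var audiblePosition = context.currentTime - startTime - latency;
      drawPlayhead(Math.max(0, audiblePosition)); // hypothetical drawing fn
      requestAnimationFrame(animate);
    }
    requestAnimationFrame(animate);

The point is simply that the playhead should follow context.currentTime
minus the reported latency, rather than currentTime itself.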

I believe the goal for any implementation should be to achieve as low a
latency as possible, one which is on par with desktop/native audio software
on the same OS/hardware that the browser runs on.  That said, as with
other aspects of the web platform (page rendering speed, cache behavior,
etc.), performance is something which is tuned (and hopefully improved)
over time for each browser implementation and OS.

Chris

Received on Wednesday, 16 January 2013 20:18:41 UTC