W3C home > Mailing lists > Public > public-audio@w3.org > July to September 2013

Re: [web-audio-api] Need a way to determine AudioContext time of currently audible signal (#12)

From: Olivier Thereaux <notifications@github.com>
Date: Wed, 11 Sep 2013 07:29:32 -0700
To: WebAudio/web-audio-api <web-audio-api@noreply.github.com>
Message-ID: <WebAudio/web-audio-api/issues/12/24244198@github.com>
> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=20698#23) by Pierre Bossart on W3C Bugzilla. Fri, 26 Apr 2013 20:11:57 GMT

I would like to suggest a different approach, which would solve both the latency and drift issues by adding 4 methods:

triggerTime()         // TSC when audio transfers started, in ns
currentSystemTime()   // current system time (TSC), in ns
currentRendererTime() // time reported by audio hardware, in ns; reset to zero when transfer starts
currentTime()         // audio written or read to/from the audio stack, in ns -> same as today

With these four methods, an application can determine the latency as currentTime() - currentRendererTime(). If a specific implementation doesn't actually query the hardware clock, it can fall back to a fixed OS/platform offset.
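As a minimal illustration of that latency calculation (the four methods are only the proposal above, not an existing Web Audio API; the nanosecond values below are made-up samples standing in for a real audio stack):

```javascript
// Hypothetical stand-ins for the proposed clocks, returning made-up
// nanosecond values captured at a single instant (not a real audio stack).
const clocks = {
  triggerTime: () => 1_000_000_000,         // TSC when audio transfers started
  currentSystemTime: () => 3_000_000_000,   // current system time (TSC)
  currentRendererTime: () => 1_990_000_000, // hardware time since transfer start
  currentTime: () => 2_000_000_000,         // audio written/read to/from stack
};

// Latency as proposed: audio handed to the stack minus audio actually
// rendered by the hardware.
const latencyNs = clocks.currentTime() - clocks.currentRendererTime();
console.log(`latency: ${latencyNs} ns`); // 10_000_000 ns = 10 ms here
```

With these sample values the application has written 10 ms more audio than the hardware has rendered, which is exactly the output latency it should compensate for when scheduling.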

Now if you want to synchronize audio with another event, you also have to monitor the drift between the audio and system clocks, which can be done by computing (currentSystemTime() - triggerTime()) / currentRendererTime().
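A sketch of that drift check, under the same assumptions (proposed methods, invented values; here the system clock says 2 s have elapsed since the transfer started while the hardware reports only 1.99 s of rendered audio):

```javascript
// Hypothetical stand-ins for the proposed clocks (made-up ns values).
const triggerTime = () => 1_000_000_000;         // transfer start (TSC)
const currentSystemTime = () => 3_000_000_000;   // system time now (TSC)
const currentRendererTime = () => 1_990_000_000; // hardware time since start

// Drift ratio as proposed: system-clock elapsed time over hardware time.
// Exactly 1.0 means the clocks agree; > 1.0 means the audio clock runs
// slow relative to the system clock, < 1.0 means it runs fast.
const drift = (currentSystemTime() - triggerTime()) / currentRendererTime();
console.log(drift.toFixed(6)); // ≈ 1.005025 with these sample values
```

An application would sample this ratio periodically and use it to rescale audio timestamps into the system timebase (or vice versa) before correlating audio with other events.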

Received on Wednesday, 11 September 2013 14:30:03 UTC
