Re: [web-audio-api] Need a way to determine AudioContext time of currently audible signal (#12)

> [Original comment](https://www.w3.org/Bugs/Public/show_bug.cgi?id=20698#23) by Pierre Bossart on W3C Bugzilla. Fri, 26 Apr 2013 20:11:57 GMT

I would like to suggest a different approach that would solve both the latency and drift issues by adding four methods:

```
triggerTime()         // TSC value when audio transfers started, in ns
currentSystemTime()   // current system time (TSC), in ns
currentRendererTime() // time reported by the audio hardware, in ns; reset to zero when transfer starts
currentTime()         // audio written or read to/from the audio stack, in ns -> same as today
```
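For illustration only, here is a hypothetical TypeScript sketch of how these four clocks could be grouped; the interface name `ProposedAudioClocks` is an assumption, while the method names and nanosecond units come from the proposal above.

```typescript
// Hypothetical grouping of the four proposed methods (not part of the
// Web Audio API). All values are in nanoseconds, as in the proposal above.
interface ProposedAudioClocks {
  triggerTime(): number;          // TSC value when audio transfers started
  currentSystemTime(): number;    // current system (TSC) time
  currentRendererTime(): number;  // audio-hardware time, reset to zero when transfer starts
  currentTime(): number;          // audio written or read to/from the audio stack (as today)
}
```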

With these four methods, an application can determine the latency by computing currentTime() - currentRendererTime(). If a specific implementation doesn't actually query the hardware time, it can apply a fixed OS/platform offset instead.
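A minimal usage sketch of that latency calculation, assuming a `ctx` object exposing the hypothetical interface above:

```typescript
// Assumed for illustration: some object exposing the proposed clocks.
declare const ctx: ProposedAudioClocks;

// Latency estimate: audio handed to the stack minus audio actually rendered, in ns.
const latencyNs = ctx.currentTime() - ctx.currentRendererTime();
```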

Now, if you want to synchronize audio with another event, you also have to monitor the drift between the audio and system clocks, which can be done by computing (currentSystemTime() - triggerTime()) / currentRendererTime().
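Continuing the same sketch, the drift could be monitored as:

```typescript
// Drift ratio between the system (TSC) clock and the audio-hardware clock;
// a value near 1.0 means the two clocks advance at the same rate.
const driftRatio =
  (ctx.currentSystemTime() - ctx.triggerTime()) / ctx.currentRendererTime();
```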

---
Reply to this email directly or view it on GitHub:
https://github.com/WebAudio/web-audio-api/issues/12#issuecomment-24244198

Received on Wednesday, 11 September 2013 14:30:03 UTC