- From: <bugzilla@jessica.w3.org>
- Date: Fri, 26 Apr 2013 20:11:57 +0000
- To: public-audio@w3.org
https://www.w3.org/Bugs/Public/show_bug.cgi?id=20698
Pierre Bossart <pierre-louis.bossart@linux.intel.com> changed:

    What    |Removed    |Added
    ----------------------------------------------------------------
    CC      |           |pierre-louis.bossart@linux.intel.com
--- Comment #24 from Pierre Bossart <pierre-louis.bossart@linux.intel.com> ---
I would like to suggest a different approach, which would solve both the
latency and drift issues by adding 4 methods:

- triggerTime()         // TSC timestamp when the audio transfer started, in ns
- currentSystemTime()   // current system time (TSC), in ns
- currentRendererTime() // time reported by the audio hardware, in ns, reset to
                        // zero when the transfer starts
- currentTime()         // audio written to or read from the audio stack, in ns
                        // -> same as today
With these 4 methods, an application can find the latency by looking at
currentTime() - currentRendererTime(). If a specific implementation doesn't
actually query the hardware time, it can fall back to a fixed OS/platform
offset.

Now if you want to synchronize audio with another event, you have to monitor
the drift between the audio and system clocks, which can be done by looking at
(currentSystemTime() - triggerTime()) / currentRendererTime()
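To make the proposal concrete, here is a sketch of the two computations. Note that none of these methods exist in the Web Audio API today; the constants below stand in for the values the proposed methods would return, and the numbers are made up for illustration (all in ns):

```javascript
// Hypothetical values for the four proposed methods (all nanoseconds).
const triggerTime = 1_000_000_000;         // TSC when the audio transfer started
const currentSystemTime = 3_000_000_000;   // current system time (TSC)
const currentRendererTime = 1_990_000_000; // hardware clock, zeroed at transfer start
const currentTime = 2_000_000_000;         // audio written/read to/from the stack

// Latency: audio already handed to the stack but not yet played out by
// the hardware clock.
const latencyNs = currentTime - currentRendererTime;

// Drift: ratio of elapsed system time to elapsed renderer time;
// exactly 1.0 means the two clocks are not drifting.
const driftRatio = (currentSystemTime - triggerTime) / currentRendererTime;

console.log(`latency: ${latencyNs} ns, drift ratio: ${driftRatio.toFixed(4)}`);
```

With these sample numbers the latency works out to 10,000,000 ns (10 ms), and the drift ratio is slightly above 1, meaning the system clock runs a little faster than the audio hardware clock.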
--
You are receiving this mail because:
You are the QA Contact for the bug.
Received on Friday, 26 April 2013 20:11:58 UTC