- From: Glenn Maynard <glenn@zewt.org>
- Date: Wed, 14 May 2014 18:30:29 -0500
- To: Brian Birtles <bbirtles@mozilla.com>
- Cc: whatwg <whatwg@lists.whatwg.org>
On Thu, May 8, 2014 at 2:33 AM, Brian Birtles <bbirtles@mozilla.com> wrote:

> (2014/05/08 0:49), Glenn Maynard wrote:
>
>> Can you remind me why this shouldn't just use real time, e.g. using the
>> Unix epoch as the time base? It was some privacy concern, but I can't
>> think of any privacy argument for giving high-resolution event timestamps
>> in units that are this limited and awkward.
>
> [1] has some justification for why we don't use 1970. As does [2]. I'm
> not sure what the privacy concerns raised in the past were with regards
> to 1970.

Okay, I remember. It's not that using the epoch here is itself a privacy
issue; it's that the solutions to the monotonicity problem introduce
privacy issues: if you add a global base time that isn't per-origin, that's
a tracking vector.

Maybe a solution would be to make DOMHighResTimeStamp structured clonable
(or a wrapper class, since the type itself is just a double). If you post a
timestamp to another thread, it arrives in that thread's own time base.
That way, each thread can always calculate precise deltas between two
timestamps, without exposing the actual time base. (You still can't send it
to a server, but that's an inherent problem for a timer on a monotonic
clock.)

> If you treat Date.now() as your global clock, you can roughly convert
> between different performance timelines, but with the caveat that you
> lose precision and are vulnerable to system clock adjustments. (There is
> actually a method defined for converting between timelines in Web
> Animations, but the plan is to remove it.)

That would defeat the purpose of using high-resolution timers in the first
place.

-- 
Glenn Maynard
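The structured-clone idea above amounts to rebasing a timestamp from the sender's time base to the receiver's. A minimal sketch of that arithmetic, assuming the browser internally tracks each thread's origin on a shared monotonic clock (`rebaseTimestamp`, `senderOrigin`, and `receiverOrigin` are illustrative names, not a real or proposed API):

```typescript
// Hypothetical sketch of the rebasing a structured clone could perform
// when a DOMHighResTimeStamp crosses threads. Both origins are points
// on a shared internal monotonic clock; neither would ever be exposed
// to script, which is what keeps the global base out of reach.
function rebaseTimestamp(
  ts: number,             // timestamp in the sender's time base
  senderOrigin: number,   // sender's origin on the internal monotonic clock
  receiverOrigin: number, // receiver's origin on the same clock
): number {
  // Absolute instant = senderOrigin + ts; re-express it relative to the
  // receiver's origin. The constant offset cancels in any subtraction,
  // so deltas between two rebased timestamps are exactly preserved.
  return ts + (senderOrigin - receiverOrigin);
}
```

Because the offset is constant, `rebaseTimestamp(t2, s, r) - rebaseTimestamp(t1, s, r)` equals `t2 - t1` for any two timestamps, which is why precise deltas survive the transfer without either side learning the other's time base.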
Received on Wednesday, 14 May 2014 23:30:58 UTC