
Re: [Performance Timeline] Need higher resolution timers

From: James Robinson <jamesr@google.com>
Date: Mon, 22 Aug 2011 16:07:07 -0700
Message-ID: <CAD73mdL30KS-BteBUnOon-H1d9OX5zNMbfJf7+H3w7+qDFJ0Lg@mail.gmail.com>
To: James Simonsen <simonjam@chromium.org>
Cc: public-web-perf <public-web-perf@w3.org>
On Mon, Aug 22, 2011 at 3:31 PM, James Simonsen <simonjam@chromium.org> wrote:

> Hi web-perf,
> So far, we've spec'd the Performance Timeline to use 64-bit ints of
> milliseconds. This has mostly been so that the times look like Date.now().
> It's also sufficient for network timing.
> However, looking longer term, there's a need for more precision. One
> example is graphics, where milliseconds are already insufficient for
> measuring frame rate. Down the road, as games and apps get more
> sophisticated, we can expect people to want to time things within a frame.
> Switching to a double seems like the easiest way to add more
> resolution. However, I'm a bit worried we're running out of bits. Date.now()
> returns milliseconds since 1970 and we're currently spec'd to use the same
> offset. If we switch to a double, we've already consumed 40 of the 52 bits
> available just measuring milliseconds since 1970. Getting to microsecond
> resolution leaves us with only 2 spare bits. That seems a bit tight.
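
The bit-budget arithmetic above can be checked directly (a sketch, assuming an IEEE 754 double's 52 explicit mantissa bits; the fixed value 1314000000000 stands in for Date.now() as of August 2011):

```javascript
// A double's mantissa gives 52 explicit bits of exact integer precision.
const MANTISSA_BITS = 52;

// Milliseconds since the 1970 epoch as of this thread (late August 2011).
const msSince1970 = 1314000000000;

// Bits needed to represent that many milliseconds (the email rounds to 40).
const bitsForMs = Math.ceil(Math.log2(msSince1970)); // 41

// Going from milliseconds to microseconds costs log2(1000), ~10 more bits.
const bitsForUs = bitsForMs + Math.ceil(Math.log2(1000)); // 51

// Only a bit or two of headroom remains.
const spareBits = MANTISSA_BITS - bitsForUs;
console.log(bitsForMs, bitsForUs, spareBits);
```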

Additionally, the offset does not make any sense if the system clock changes
relative to the monotonic clock.
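
A simulated sketch of that failure mode (hypothetical values; an NTP step stands in for any system clock adjustment):

```javascript
// A wall-clock timestamp pinned to the 1970 epoch goes wrong if the
// system clock is adjusted mid-measurement; a monotonic clock keeps
// intervals correct.
let wallClock = 1314000000000; // simulated Date.now(), ms since 1970
let monotonic = 0;             // simulated monotonic clock, ms

function tick(ms) { wallClock += ms; monotonic += ms; }

const wallStart = wallClock, monoStart = monotonic;
tick(5);            // 5 ms of real elapsed time
wallClock -= 60000; // NTP steps the system clock back one minute

console.log(wallClock - wallStart); // -59995: a nonsense interval
console.log(monotonic - monoStart); // 5: correct
```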

> If we throw out that 1970 offset, we can get much higher resolution times.
> I propose we just measure time since initial root document navigation (and
> hope nobody leaves the same page open for 40 years). It could be stored as a
> double of milliseconds.
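
The quoted scheme could be sketched like this (hypothetical helper names, not spec text; Node's process.hrtime() stands in for a browser monotonic clock, which had no standard equivalent in 2011):

```javascript
// Stand-in monotonic clock source, in fractional milliseconds.
function monotonicNowMs() {
  const [s, ns] = process.hrtime();
  return s * 1e3 + ns / 1e6;
}

// Captured once, at root document navigation.
const navigationStartMonotonic = monotonicNowMs();

// Every timeline timestamp becomes a double of fractional milliseconds
// since navigation. Values stay small, leaving mantissa bits to spare
// for sub-millisecond resolution, and are immune to system clock changes.
function timelineTimestamp() {
  return monotonicNowMs() - navigationStartMonotonic;
}
```
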
To be clear, this would apply to all timestamps in the timeline - ones from
network timing and user timing - right?

- James

> James
Received on Monday, 22 August 2011 23:07:39 UTC
