- From: Zhiheng Wang <zhihengw@google.com>
- Date: Mon, 22 Aug 2011 15:54:25 -0700
- To: James Simonsen <simonjam@chromium.org>
- Cc: public-web-perf <public-web-perf@w3.org>
- Message-ID: <CAA1TnvVX7aiz9+z90X7PKPigmCB9BDhGvv1K2c9mFsNbXHsM3Q@mail.gmail.com>
On Mon, Aug 22, 2011 at 3:31 PM, James Simonsen <simonjam@chromium.org> wrote:

> Hi web-perf,
>
> So far, we've spec'd the Performance Timeline to use 64-bit ints of
> milliseconds. This has mostly been so that the times look like Date.now().
> It's also sufficient for network timing.
>
> However, looking longer term, there's a need for more precision. One
> example is graphics, where milliseconds are already insufficient for
> measuring frame rate.

Do you have a more specific example?

> Down the road, as games and apps get more sophisticated, we can expect
> people to want to time things within a frame.

IIRC, 50 msec is the threshold for a human to detect any latency at all in
FPS games. An app can still measure some other ops inside it. But overall, I
am still not sure why an application really cares to know the exact
sub-millisecond delay.

cheers,
Zhiheng

> Switching to a double seems like the easiest solution to adding more
> resolution. However, I'm a bit worried we're running out of bits. Date.now()
> returns milliseconds since 1970 and we're currently spec'd to use the same
> offset. If we switch to double, we've already consumed 40 of the 52 bits
> available just measuring milliseconds since 1970. Getting to microsecond
> resolution leaves us with only 2 spare bits. That seems a bit tight.
>
> If we throw out that 1970 offset, we can get much higher resolution times.
> I propose we just measure time since initial root document navigation (and
> hope nobody leaves the same page open for 40 years). It could be stored as a
> double of milliseconds.
>
> James
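[Editor's note: a minimal TypeScript sketch, not part of the original thread, working through the bit budget James describes. It assumes an IEEE 754 double (52 explicit mantissa bits) and an ES2015+ runtime for Math.log2; the dates and rounding follow the email.]

```ts
// Bit budget for storing timestamps in an IEEE 754 double.
const MANTISSA_BITS = 52;

// Milliseconds from the 1970 epoch to the date of this email (Aug 2011).
const msSinceEpoch = Date.UTC(2011, 7, 22); // months are 0-indexed

// Integer bits already consumed by "milliseconds since 1970".
const bitsUsed = Math.floor(Math.log2(msSinceEpoch));
console.log(bitsUsed); // ~40, the "40 of the 52 bits" in the email

// Microsecond resolution costs ~10 more bits (1000 ≈ 2^10).
console.log(MANTISSA_BITS - bitsUsed - 10); // 2 spare bits -> "a bit tight"

// With a per-navigation origin instead, a double of milliseconds keeps
// full microsecond precision until its integer part outgrows 42 bits.
const msPerYear = 1000 * 60 * 60 * 24 * 365;
console.log((2 ** (MANTISSA_BITS - 10) / msPerYear).toFixed(0)); // ~139 years
```

In effect, this is the trade-off the proposal makes: dropping the 1970 epoch in favor of a per-navigation time origin buys decades of sub-microsecond headroom from the same double.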
Received on Monday, 22 August 2011 22:54:50 UTC