RE: Specifying window.performance.now()

David,

Thank you for taking the time to provide feedback.

> [4.2] "A DOMHighResTimeStamp represents a number of milliseconds"
>
> Was milliseconds chosen for compatibility with other timing APIs? Seconds seems nicer conceptually 
> but I understand if all the other related APIs are already ms.

Considering one of the goals of this new API is to provide sub-millisecond resolution, using seconds as the unit seems less appropriate. Additionally, most other time values in the web platform are provided in milliseconds. For example, converting a Date.now() value to the new timebase requires only subtracting performance.timing.navigationStart, with no additional unit conversion.
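To make that concrete, here is a minimal sketch of the conversion; `toNavigationStartRelative` is a hypothetical helper, not part of the spec:

```javascript
// Hypothetical helper: convert an epoch-millisecond timestamp
// (e.g. from Date.now()) into the navigationStart-relative timebase
// used by DOMHighResTimeStamp values.
function toNavigationStartRelative(epochMs, navigationStart) {
  // Both sides are already in milliseconds, so the conversion is a
  // single subtraction -- no unit scaling required.
  return epochMs - navigationStart;
}

// Example: a navigation that started at epoch 1314223489090
const navigationStart = 1314223489090;
console.log(toNavigationStartRelative(1314223489190, navigationStart)); // 100
```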

> [4.2] "represents a number of milliseconds accurate to at least a tenth of a millisecond."
>
> I'm curious about the rationale for this requirement. The users I talked to seemed to just 
> want the best precision possible on the given system, so they wouldn't need such a requirement. 
> But I don't see a problem with having the requirement.

Since a tenth of a millisecond is the coarsest resolution that can still be considered sub-millisecond, I chose it as the minimum requirement. Because DOMHighResTimeStamp is bound to a JavaScript double, vendors can provide much more accuracy. I think anything at a thousandth of a millisecond (a microsecond) or finer should be more than sufficient for timing analysis needs.

> [4.3] "On getting, the now attribute MUST return the number of milliseconds from the start of the 
> navigation of the root document to the occurrence of the call to the now attribute."
> 
> The main application of Performance.now seems to be relative timing, which doesn't particularly 
> require any origin. It seems to me to be a bit easier to allow an arbitrary origin, so that the 
> implementation doesn't have to track a zero time and subtract. Do some important applications 
> require zero to be start of navigation?

For relative timing, the timebase doesn't matter as long as the two time values being compared have the same origin. However, for the Timing specifications, absolute times would be desired. Start of the navigation seemed like the most appropriate origin.

As mentioned in an earlier thread, for the Performance Timeline, having all timestamps based on the beginning of the navigation of the root document (navigationStart) makes it easier to visually parse timestamps, since everything is 0-based instead of current Unix-epoch-time based.

Before, looking at a ResourceTiming entry that started 100ms after navigationStart (@ 1314223489090 epoch):
startTime:   1314223489190
responseEnd: 1314223489110

After switching to navigationStart-based sub-millisecond timestamps, it's easier to quickly see that this resource was requested 100ms into the navigation and took a total of 20ms, instead of trying to do math with 13-digit numbers.
startTime:   100.000
responseEnd: 120.000
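With the navigationStart-relative form, the total duration falls out of a single subtraction:

```javascript
// Reading a duration off navigationStart-relative timestamps.
const startTime = 100.000;
const responseEnd = 120.000;
console.log(responseEnd - startTime); // 20 (ms)
```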

> [4.4] "MUST be monotonically increasing and not subject to system clock adjustments or system clock skew"
>
> Just to make sure I understand, does "not subject to system clock adjustments" mean that two calls to 
> Performance.now must measure the true wall clock interval even if the system clock was reset by the user in between?
>
> This seems to be a good requirement in principle but I'm not deeply knowledgeable about how easy it is to implement
> on different platforms. 

Yes, the goal is that the delta between two subsequent time values should never be negative, as we sometimes saw in Navigation Timing before we required a monotonic clock.

I believe most operating systems provide a way to get monotonically increasing time. On Windows, for example, QueryPerformanceCounter() provides a high-resolution counter that is not impacted by clock skew or adjustments. (GetSystemTimeAsFileTime(), by contrast, reflects the wall clock and can jump when the system clock is adjusted, so it is not suitable on its own.)

> [4.5] "it does not make this privacy concern significantly worse than it was already."
> 
> That's also what I've been told. :-) The argument was that timing attacks work fine 
> with low-resolution timers, they just take longer, so having only low-resolution timers 
> just gives a false sense of security. That makes sense to me.

You do not need sub-millisecond resolution to determine whether you had a cache hit or a miss, so this API doesn't make statistical fingerprinting any worse.

Thanks,
Jatinder

Received on Wednesday, 29 February 2012 00:57:43 UTC