- From: Martin Thomson <notifications@github.com>
- Date: Mon, 09 Jun 2025 04:06:24 -0700
- To: w3ctag/design-reviews <design-reviews@noreply.github.com>
- Cc: Subscribed <subscribed@noreply.github.com>
- Message-ID: <w3ctag/design-reviews/issues/878/2955467486@github.com>
martinthomson left a comment (w3ctag/design-reviews#878)

As is often the case, I was focusing too narrowly. Let's put this all in perspective.

The fundamental problem with the underlying API is its usefulness for fingerprinting. With good performance metrics, you are able to segregate a user population by various performance characteristics: the speed at which connections are created, the speed at which they download content, the speed at which they perform layout, and the speed at which they execute JavaScript. A site visitor in the slowest 10% of visitors to one site is probably also in the slowest 10% of visitors to another site. (That the spec has [a privacy considerations section](https://w3c.github.io/navigation-timing/#privacy) that fails to acknowledge this is a problem.)

Much of this is already observable on the platform, but the API makes these things trivial to measure as a side effect of loading content. The performance timing API therefore takes what might otherwise require active fingerprinting and serves up the information with no special effort required.

The main thing that stops this from being a reliable source of fingerprinting information is the sheer amount of variability in the metrics. That variability is what allows us to pretend "this is fine". As long as we're in a fingerprinting-rich environment - or we have browsers with third-party cookies - maybe we can maintain that pretense.

What is unique about the addition is that "confidence" reflects a value that doesn't have the inherent noise of all the rest of the timing information. Making the "confidence" parameter noisy only makes it necessary to collect multiple samples (epsilon values accumulate over multiple observations, eliminating the noise). It seems to me like you want to encourage sites to take multiple observations across a user population rather than of a single user, but there are no inbuilt systems that prevent abuse.
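The averaging point can be made concrete with a minimal sketch. This is not from the spec: the function name, the Gaussian noise model, and the parameter values are all hypothetical, chosen only to show why additive noise on a stable underlying value does not survive repeated observation.

```python
import random

def noisy_confidence(true_value: float, sigma: float = 0.2) -> float:
    # Hypothetical model: a stable per-user "confidence" value reported
    # with fresh additive noise on each observation.
    return true_value + random.gauss(0.0, sigma)

random.seed(0)
samples = [noisy_confidence(0.9) for _ in range(10_000)]
estimate = sum(samples) / len(samples)
# The sample mean converges on the underlying value (0.9 here) as
# observations accumulate, so noise alone does not prevent a site that
# sees a user repeatedly from recovering the stable signal.
```

The standard error of the mean shrinks as 1/sqrt(n), which is why per-observation noise is only a mitigation when the number of observations per user is itself bounded.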
Either way, it looks like the problem is inherent in the underlying API, less so in the proposed addition. But if the underlying API is bad for privacy, can we endorse extending it further to make it worse?

-- Reply to this email directly or view it on GitHub: https://github.com/w3ctag/design-reviews/issues/878#issuecomment-2955467486
Received on Monday, 9 June 2025 11:06:28 UTC