- From: Alice <notifications@github.com>
- Date: Fri, 19 Jul 2019 14:27:55 -0700
- To: w3ctag/design-reviews <design-reviews@noreply.github.com>
- Cc: Subscribed <subscribed@noreply.github.com>
Received on Friday, 19 July 2019 21:28:17 UTC
We're wondering what the effective precision of this metric is, given the guidelines you outline above and this comment in the explainer:

> It is intended that the LS score have a correspondence to the perceptual severity of the instability, but not that all user agents produce exactly the same LS scores for a given page.

If that is the case, would it make sense to expose this as, say, an integer between 1 and 10 (assuming variance within those 10 buckets is essentially noise), and truncate values higher than 10 to 10?

--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/w3ctag/design-reviews/issues/393#issuecomment-513383873
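[For illustration only, not part of the proposal or any spec: the bucketing suggested above could be sketched as follows. The raw score and the scaling factor of 10 are assumptions chosen for demonstration; the comment does not specify a mapping.]

```javascript
// Hypothetical sketch: coarsen a raw layout instability score into an
// integer bucket from 1 to 10, truncating anything above 10 to 10,
// so that sub-bucket variance between user agents is treated as noise.
function bucketScore(rawScore) {
  // Assumed scaling: multiply by 10 and round up to the next integer.
  const scaled = Math.ceil(rawScore * 10);
  // Clamp into the 1..10 range; values above 10 truncate to 10.
  return Math.min(Math.max(scaled, 1), 10);
}
```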