- From: Marcos Cáceres <notifications@github.com>
- Date: Tue, 31 Mar 2026 22:20:52 -0700
- To: w3ctag/design-reviews <design-reviews@noreply.github.com>
- Cc: Subscribed <subscribed@noreply.github.com>
- Message-ID: <w3ctag/design-reviews/issues/1198/4167532657@github.com>
marcoscaceres left a comment (w3ctag/design-reviews#1198)
@nickie, just wondering if you've fed this to Gemini for a review? It confirmed a lot of the concerns I personally had with the proposal.
Here's a prompt. Hope that helps in the meantime while we do human reviews.
<details>
# Gemini prompt: `navigator.cpuPerformance` self-review
---
You are a critical reviewer of web platform proposals. Your job is to find
real problems, not to validate the proposal. Do not assume the design is sound
just because a proposal exists. Apply the following tests rigorously.
**Proposal:** https://github.com/WICG/cpu-performance/
Fetch and read the full explainer and spec before answering anything.
---
## Test 1: Use-case coverage by existing APIs
For each use case listed in the proposal, determine whether an existing W3C
or WHATWG API already provides a more direct signal. Specifically:
- For video/media use cases: evaluate `MediaCapabilities.decodingInfo()` and
`VideoDecoder.isConfigSupported()` (WebCodecs). Does either API answer the
question the proposal claims requires CPU tier? Quote the relevant spec text
for each.
- For ML/AI use cases: evaluate the WebNN API, specifically `opSupportLimits()`
and the UA compute-unit selection model. Does WebNN already address on-device
vs. server-side inference decisions without requiring CPU tier? Quote the spec.
- For each remaining use case: is there an existing API that provides a more
accurate or more targeted signal than a 4-tier CPU classification?
Conclude: which use cases, if any, are genuinely unmet by existing APIs?
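To anchor the video comparison, here is a sketch of how the existing Media
Capabilities API already answers the "can this device handle 4K?" question
directly. The codec string, resolution, and bitrate are illustrative
assumptions, and the stub exists only so the sketch runs outside a browser:

```javascript
// Answers "can this device decode 4K VP9 smoothly?" via the existing
// Media Capabilities API — no CPU tier involved. The stub is only so
// the sketch is self-contained outside a browser; in a page, the real
// navigator.mediaCapabilities object is used.
const stub = {
  decodingInfo: async () => ({ supported: true, smooth: true, powerEfficient: true }),
};
const mediaCapabilities = globalThis.navigator?.mediaCapabilities ?? stub;

async function pickResolution() {
  const info = await mediaCapabilities.decodingInfo({
    type: "media-source",
    video: {
      contentType: 'video/webm; codecs="vp09.00.10.08"', // hypothetical stream
      width: 3840,
      height: 2160,
      bitrate: 12_000_000,
      framerate: 30,
    },
  });
  // `smooth` is the direct per-configuration signal that a coarse
  // device-wide CPU tier can only approximate.
  return info.supported && info.smooth ? "offer 4K" : "cap at 1080p";
}

pickResolution().then((decision) => console.log(decision));
```

The point of comparison: `decodingInfo()` is scoped to one concrete media
configuration, whereas a 4-tier CPU signal forces the site to guess the
mapping from tier to codec capability itself.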
---
## Test 2: The reproducibility claim vs. OS reality
The spec requires the same device to always return the same tier regardless of
current system state ("reproducibility: independent of current load").
Evaluate this claim against:
- Thermal throttling behaviour on mobile SoCs (ARM big.LITTLE, Apple M-series,
Qualcomm Snapdragon)
- OS power profiles (Windows Balanced/Performance/Power Saver, Android Doze,
iOS Low Power Mode)
- Battery state effects on CPU boost duration
- Background process load (OS indexing, antivirus, system updates)
Is "reproducibility independent of current load" achievable in practice? If the
tier reflects nominal device capability but not current operating conditions,
what is the practical accuracy of the tier for the real-time adaptive use cases
the spec describes?
---
## Test 3: Privacy and fingerprinting surface
The spec says it was designed with privacy in mind. Evaluate:
- What is the information gain from `navigator.cpuPerformance` when combined
with `navigator.hardwareConcurrency`, `navigator.deviceMemory`, and WebGL
renderer strings? Does the 4-tier bucketing meaningfully reduce fingerprint
entropy compared to those signals combined?
- The spec requires SecureContext but defines no Permissions Policy feature.
What does this mean for cross-origin iframes and third-party scripts?
- The tier is stable by design ("no reclassification"). How does a permanent,
stable signal interact with cross-session fingerprinting?
- The spec explicitly lists "Select ads that are better suited for the user
device" as a use case. Tier 1 devices correlate with lower-income users.
Is there a consent mechanism for this use? Should there be?
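To make the entropy question concrete, here is a back-of-envelope upper
bound. The value counts are illustrative assumptions, and the sum assumes the
signals are independent; the real marginal gain is lower because tier
correlates with core count and memory, and quantifying that correlation is
exactly what this test should do:

```javascript
// Upper-bound fingerprint entropy under an independence assumption.
// Value counts are illustrative, not measured.
const bits = (n) => Math.log2(n);

const hardwareConcurrency = bits(12); // assume ~12 common core counts in the wild
const deviceMemory = bits(6);         // Device Memory spec buckets: 0.25/0.5/1/2/4/8
const cpuTier = bits(4);              // the proposal's 4 tiers

const existing = hardwareConcurrency + deviceMemory;
const withTier = existing + cpuTier;

console.log(`existing ≈ ${existing.toFixed(2)} bits, ` +
            `with tier ≤ ${withTier.toFixed(2)} bits (+${cpuTier} bits at most)`);
```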
---
## Test 4: Abstraction level
Chrome internally classifies devices into performance tiers for rendering
heuristics (compositor thread budgets, animation scheduling, etc.). Safari does
the same.
- If UA-internal classification already exists, what is the marginal value of
exposing it to web content as a static property vs. as a dynamic media query
(e.g. `@media (performance-tier: low)`) that the UA updates based on current
system state?
- `prefers-reduced-motion` and `prefers-color-scheme` are OS-mediated signals
surfaced as media queries. What are the tradeoffs between that model and the
`navigator.cpuPerformance` model for this use case?
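To sharpen the comparison, here is a sketch of what the dynamic model would
look like in script. The `performance-tier` media feature is hypothetical (it
exists in no engine), and the `FakeMQL` class is a stand-in for a real
`MediaQueryList` so the sketch runs outside a browser:

```javascript
// Dynamic model: the UA re-evaluates the (hypothetical) media feature as
// thermal/power state changes and the page reacts — unlike a static
// navigator property read once at load.
function watchTier(mql, onChange) {
  onChange(mql.matches); // current state
  mql.addEventListener("change", (e) => onChange(e.matches)); // UA-driven updates
}

// Minimal stand-in for MediaQueryList so the sketch is self-contained.
class FakeMQL extends EventTarget {
  constructor(matches) { super(); this.matches = matches; }
  simulate(matches) {
    this.matches = matches;
    this.dispatchEvent(Object.assign(new Event("change"), { matches }));
  }
}

// In a browser this would be: matchMedia("(performance-tier: low)")
const lowTier = new FakeMQL(false);
const seen = [];
watchTier(lowTier, (isLow) => seen.push(isLow ? "reduce effects" : "full effects"));
lowTier.simulate(true); // e.g. the device starts thermal throttling
console.log(seen); // → ["full effects", "reduce effects"]
```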
---
## Output format
For each test:
1. State your verdict: **passes** / **fails** / **partially passes**
2. Give the specific evidence (quoted spec text, API behavior, OS behavior)
that supports the verdict
3. If it fails, state what would need to change for it to pass
End with an overall assessment: does the proposal justify shipping as specified,
or does it need rework? If rework, what is the minimum viable change set?
Do not hedge. If the evidence points to a problem, state the problem clearly.
</details>
--
Reply to this email directly or view it on GitHub:
https://github.com/w3ctag/design-reviews/issues/1198#issuecomment-4167532657
Received on Wednesday, 1 April 2026 05:20:55 UTC