Re: [whatwg] Proposal: navigator.cores

Adam Barth <w3c@adambarth.com> writes:

> Over on blink-dev, we've been discussing [1] adding a property to navigator
> that reports the number of cores [2].  As far as I can tell, this
> functionality exists in every other platform (including iOS and Android).
>  Some of the use cases for this feature have been discussed previously on
> this mailing list [3] and rejected in favor of a more complex system,
> perhaps similar to Grand Central Dispatch [4].  Others have raised concerns
> that exposing the number of cores could lead to increased fidelity of
> fingerprinting [5].
>
> My view is that the fingerprinting risks are minimal.  This information is
> already available to web sites that wish to spend a few seconds probing
> your machine [6].  Obviously, exposing this property makes that easier and
> more accurate, which is why it's useful for developers.

The probing script referenced in [6] pushed system load to 2.5 for 30
seconds and then estimated that I have 4 cores – when running on an old
Thinkpad T60 inside Chromium 33.

Fact: A sticker on the laptop says “Core Solo”. /proc/cpuinfo agrees.
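
For those who have not looked at [6]: such probes typically spawn
batches of busy-looping workers and time them, roughly like the sketch
below (the names, thresholds and constants here are invented for
illustration; this is not the cited script). All the timing noise on a
loaded machine feeds straight into the estimate, which is presumably
how a Core Solo gets reported as 4 cores.

    // Rough sketch of a timing-based core estimate using Web Workers.
    // Each worker runs a fixed amount of busy work; we double the batch
    // size until wall-clock time stops staying roughly flat.
    const workerSource = `
      onmessage = () => {
        let x = 0;
        for (let i = 0; i < 1e8; i++) { x += i; } // arbitrary busy work
        postMessage(x);
      };
    `;
    const workerUrl = URL.createObjectURL(
      new Blob([workerSource], { type: "application/javascript" })
    );

    // Run n workers in parallel, resolve with the wall-clock time taken.
    function timeRun(n: number): Promise<number> {
      return new Promise((resolve) => {
        const workers = Array.from({ length: n }, () => new Worker(workerUrl));
        let pending = n;
        const start = performance.now();
        for (const w of workers) {
          w.onmessage = () => {
            if (--pending === 0) {
              workers.forEach((w2) => w2.terminate());
              resolve(performance.now() - start);
            }
          };
          w.postMessage(null);
        }
      });
    }

    // Double the worker count until the batch takes noticeably longer
    // than the single-worker baseline; the last "flat" count is the
    // estimate (rounded to a power of two, and easily skewed by load).
    async function estimateCores(): Promise<number> {
      const baseline = await timeRun(1);
      let estimate = 1;
      for (let n = 2; n <= 32; n *= 2) {
        const t = await timeRun(n);
        if (t > baseline * 1.5) break; // scaling broke down: out of cores
        estimate = n;
      }
      return estimate;
    }

    estimateCores().then((n) => console.log("estimated cores:", n));
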

I think it would be a bad idea to expose this information to web
applications in an easier and more accurate way for two reasons:

First, this enables web application authors to discriminate unduly. Web
authors often try to impose minimum specifications on user agents when it
suits their purpose, and such requirements are often a lie, meant to make
users more susceptible to tracking and advertising. Enabling authors to
discriminate moves the web away from being a universal medium – see, for
example, User-Agent string discrimination.

I do not want users to see “sorry, this web page needs at least 2 cores”
– it will most likely be a lie, similar to messages informing users that
they need to allow cookies and JavaScript just to read a web page.

Additionally, from a game-theory standpoint, I think it is a bad idea to
expose the processing resources a client has to applications at all. If
a non-malicious application author wants to know the number of cores,
it is presumably for the purpose of using as much processing power as
possible.

Applications that grab as many resources as they can do not play well
with other applications that may or may not follow the same strategy.
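
To make that concrete, the benign use case the proposal serves looks
roughly like this (a sketch assuming the proposed navigator.cores
property; "worker.js" is a placeholder):

    // Size a worker pool to the reported core count and keep every core
    // busy. navigator.cores is the proposed property from [2].
    const coreCount: number = (navigator as any).cores ?? 1;

    const pool: Worker[] = [];
    for (let i = 0; i < coreCount; i++) {
      pool.push(new Worker("worker.js"));
    }

    // Each worker gets a slice of the job; nothing here asks whether
    // other tabs or native applications also need those cores.
    pool.forEach((worker, index) => {
      worker.postMessage({ shard: index, totalShards: pool.length });
    });

Two pages doing this on the same machine will each act as if the full
core count were theirs alone.
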

I think a good real-world example of how a computing system can handle
constrained resources is the OOM killer in Linux. It is simple: if your
application uses too much memory, it will be terminated with prejudice.

Of course, authors cannot know what “too much” means – systems are too
diverse for that. This gives authors an incentive to err on the side of
caution, as trying to allocate and use all available memory is most
likely fatal.
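
Applied to processing power, erring on the side of caution would look
roughly like the sketch below: grow a worker pool only while measured
throughput still improves, instead of trusting an advertised number.
(runBatch is a hypothetical helper that dispatches a fixed amount of
work to the given workers and resolves with the wall-clock time taken;
"worker.js" is again a placeholder.)

    // Cautious alternative: add workers one at a time and stop as soon
    // as another worker no longer makes the batch meaningfully faster.
    async function growPoolCautiously(
      runBatch: (workers: Worker[]) => Promise<number>,
      maxWorkers = 16
    ): Promise<Worker[]> {
      const pool: Worker[] = [new Worker("worker.js")];
      let bestTime = await runBatch(pool);

      while (pool.length < maxWorkers) {
        pool.push(new Worker("worker.js"));
        const t = await runBatch(pool);
        if (t > bestTime * 0.95) {      // no meaningful speed-up: back off
          pool.pop()!.terminate();
          break;
        }
        bestTime = t;
      }
      return pool;
    }
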

-- 
Nils Dagsson Moskopp // erlehmann
<http://dieweltistgarnichtso.net>

Received on Monday, 5 May 2014 01:31:13 UTC