Re: [whatwg] Proposal: navigator.cores

On Tue, 06 May 2014 01:29:47 +0200, Kenneth Russell <kbr@google.com> wrote:

> Applications need this API in order to determine how many Web Workers
> to instantiate in order to parallelize their work.
>

On Tue, 06 May 2014 01:31:15 +0200, Eli Grey <me@eligrey.com> wrote:

> I have a list of example use cases at
> http://wiki.whatwg.org/wiki/NavigatorCores#Example_use_cases
>
> (...)

I assume everyone reading this thread understands the use cases and
agrees with them.

The disagreement is about what kind of API is needed. Many people have
stated, rightly so, that a raw core count gives little information that
is actually useful.

It's better to have an API that reports the optimal number of parallel
tasks that can run, because who knows what else is running in other
processes (the page the worker belongs to, the browser UI, plugins,
other pages, iframes, etc.) and under what load. Renaming 'cores' to
'parallelTaskCount' would be a start.
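
For illustration, a page could size its worker pool from such an
attribute. A minimal sketch in TypeScript, assuming the hypothetical
'parallelTaskCount' attribute suggested above (the fallback of 2 and
the 'task.js' script are placeholders of mine, not part of any
proposal):

  // Size a worker pool from a hypothetical navigator.parallelTaskCount,
  // falling back to an arbitrary small number when it is absent.
  const taskCount: number = (navigator as any).parallelTaskCount ?? 2;

  const workers: Worker[] = [];
  for (let i = 0; i < taskCount; i++) {
    workers.push(new Worker('task.js')); // placeholder worker script
  }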

On Tue, 06 May 2014 01:31:15 +0200, Eli Grey <me@eligrey.com> wrote:

> Also, allowing webapps to set thread priority is very dangerous and
> can cause system lockup.
>

Nobody mentioned giving direct, unsanitized access to setting low-level
thread/process priority.

On Tue, 06 May 2014 01:29:47 +0200, Kenneth Russell <kbr@google.com> wrote:

> A prioritization API for Web Workers won't solve this problem, because
> all of the workers an application requests must actually be spawned
> for correctness purposes. There's no provision in the web worker
> specification for allocation of a web worker to fail gracefully, or
> for a worker to be suspended indefinitely. Even if a worker had its
> priority designated as "low", it would still need to be started.

You're identifying limitations in the workers API. That's good. It
would be very useful to add the missing bits you're asking for to the
worker API.
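
To make that concrete, the missing provisions might look something like
the sketch below. This is a purely hypothetical API shape on my part;
neither a 'priority' option nor a defined spawn-failure path exists in
the current worker specification:

  // Hypothetical only: a priority hint plus a graceful spawn-failure
  // signal, i.e. the provisions said to be missing above.
  const worker = new Worker('task.js', { priority: 'low' } as any);
  worker.addEventListener('error', (e: ErrorEvent) => {
    // With a graceful-failure provision, the page could fall back to
    // fewer workers here instead of breaking outright.
    console.warn('worker could not be started:', e.message);
  });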

> On 32-bit systems, at least, spawning too many workers will cause the
> user agent to run out of address space fairly quickly.

That sounds like a poorly implemented user agent. Limiting the
resources that an unprivileged application can use on the system is a
very old idea, and browsers already do it with disk quotas for local
storage, databases, and the application cache, for instance.
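
As a sketch of that idea (the cap of 16 is an arbitrary number of mine,
not something any real browser does), a user agent, or for that matter
a defensive page, could clamp the pool the same way disk quota is
clamped:

  // Illustrative clamp only; the cap is an assumed value, analogous to
  // a disk quota rather than taken from any real user agent.
  const MAX_WORKERS_PER_ORIGIN = 16;

  function spawnCapped(requested: number): Worker[] {
    const n = Math.min(requested, MAX_WORKERS_PER_ORIGIN);
    return Array.from({ length: n }, () => new Worker('task.js'));
  }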

I wouldn't want to use that browser either. Typing a URL into the
address bar does not mean that I give the web page permission to hog my
computer or device with excessive CPU and memory usage.

Received on Tuesday, 6 May 2014 11:58:13 UTC