
[whatwg] Workers: what should happen when exceeding worker limit?

From: Aryeh Gregor <Simetrical+w3c@gmail.com>
Date: Fri, 31 Dec 2010 13:38:57 -0500
Message-ID: <AANLkTin-5yYEuppiKQG6SaA_=NW3bakR9LqPzSYz1ESf@mail.gmail.com>
On Thu, Dec 30, 2010 at 7:11 PM, Ian Hickson <ian at hixie.ch> wrote:
> That's a hardware limitation, and as such is something we tend to leave up
> to the user agents. In practice, it's often the case that user agents are
> very constrained in how they can deal with hardware limitations (e.g. if
> the user agent cannot allocate more memory, then it might not be able to
> allocate memory to fire an exception, or to keep track of the worker to
> run it later), and therefore we tend to leave that open. So long as the
> limitations are big enough that most pages don't run into them, it doesn't
> really matter -- a user agent with a compatibility issue can almost
> always just increase the limits if pages would otherwise break!

That doesn't help authors whose pages break unpredictably.  I've long
thought that HTML5 should specify hardware limitations more precisely.
Clearly it can't cover all cases, and some sort of general escape
clause will always be needed -- but in cases where limits are likely
to be low enough that authors might run into them, the limit should
really be standardized.  Compare to UAX #9's limit of 61 for explicit
embedding depth.  Similarly, there's no reason UAs shouldn't
standardize on maximum URL length -- inconsistency there has caused
interoperability problems (mainly IE's limit being too low).  The goal
be the same code working the same in all browsers, without authors
having to learn how each browser behaves in corner cases like lots of
workers.

> Unfortunately we can't really require immediate failure, since there'd be
> no way to test it or to prove that it wasn't implemented -- a user agent
> could always just say "oh, it's just that we take a long time to launch
> the worker sometimes". (Performance can be another hardware limitation.)

In principle this is so, but in practice it's not.  In real life, you
can easily tell an algorithm that runs the first sixteen workers and
then stalls any further ones until one of the early ones exits, from an
algorithm that just takes a while to launch workers sometimes.  I
think it would be entirely reasonable and would help interoperability
in practice if HTML5 were to require that the UA must run all pending
workers in some manner that doesn't allow starvation, and that if it
can't do so, it must return an error rather than accepting a new
worker.  Failure to return an error should mean that the worker can be
run soon, within a predictable timeframe -- not possibly at some
indefinite point in the future.
Received on Friday, 31 December 2010 10:38:57 UTC
