
RE: [Efficient Script Yielding] - Clamping

From: Jason Weber <jweber@microsoft.com>
Date: Wed, 6 Jul 2011 21:01:44 +0000
To: James Robinson <jamesr@google.com>
CC: "public-web-perf@w3.org" <public-web-perf@w3.org>, Jatinder Mann <jmann@microsoft.com>
Message-ID: <8442F4DCA0FE304198740526F60E8D8B07065222@TK5EX14MBXC243.redmond.corp.microsoft.com>
There are many, many ways for a website to peg the CPU/GPU today. The Chrome Experiments and the IE TestDrives are great examples of how web developers can stress what's possible - whether intentional, unintentional, or machine-dependent. What makes the setImmediate API different and subject to clamping? Would you recommend browsers attempt to clamp every web pattern that can peg the CPU?

Also, you mentioned that you disagree with my historical account. What pieces are technically inaccurate? These are hard engineering numbers, so there shouldn't be room for interpretation. I want to ensure I'm not missing something.

Thanks, Jason



From: James Robinson [mailto:jamesr@google.com]
Sent: Wednesday, July 06, 2011 1:06 PM
To: Jason Weber
Cc: public-web-perf@w3.org; Jatinder Mann
Subject: Re: [Efficient Script Yielding] - Clamping


On Fri, Jul 1, 2011 at 4:41 PM, Jason Weber <jweber@microsoft.com> wrote:
I would argue the two decades of implicit clamping is the problem and not buggy javascript.

Let's take a specific scenario - using setTimeout(0) for frame based animation in 2004.

When the web developer wrote this code, they were likely using Firefox or IE6 on Windows XP, or an early version of Safari on Mac OS X. In all of these environments setTimeout(0) was implicitly clamped by the operating system. When they tested their animation, it produced the visual results they expected. On Windows they would have received ~64 callbacks a second, and the human eye can't distinguish between 60 and 64 callbacks a second. On the Mac they may have received ~100 callbacks a second. The difference was not significant; if anything, they likely assumed the Mac was simply faster than Windows, without realizing there was a bug in their code. If setTimeout hadn't been implicitly clamped, their animation wouldn't have worked, and they would have found and fixed the bug at that point in time.
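A minimal sketch of that 2004-era pattern (all names here are illustrative, not taken from any real site; the loop is capped at a fixed frame count so the sketch terminates when run under Node.js):

```javascript
// Frame-based animation driven by setTimeout(0) — a bug in the script,
// but one that "worked" because the OS/browser clamp throttled callbacks
// to roughly the display refresh rate.
let frame = 0;
function animate() {
  // drawScene(frame); // hypothetical rendering work would go here
  frame++;
  if (frame < 10) setTimeout(animate, 0); // author assumed ~60-100 callbacks/sec
}
animate();
```

With the historical clamps in place, this loop could never spin faster than the OS timer resolution allowed, which is exactly why the bug went unnoticed.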

There will always be buggy code in the world and people will always abuse API's including setImmediate. I have faith web developers will be able to use the new API appropriately.


I disagree with both your historical assessment and with this statement. It's important to consider the dynamics here: it only takes one popular website spinning the CPU with setImmediate() to force a clamp.

Recently, we encountered a page in Chrome that had an indefinitely spinning timer. The root cause was a popular library that was polling with a zero-delay timer for a piece of the page to load and storing the timer ID in a global. When the load condition was met, the timer was cancelled. A popular newspaper website managed to include multiple copies of this library from different components on the page, causing the timer ID to get clobbered and all of the timers except the last-set one to be leaked.

The library author is definitely at fault for polling and for storing the timer ID in a global, but they obviously didn't expect the script to be included multiple times in the same context, and normally the polling would be very short-lived. The page author is also definitely at fault for including the script multiple times, but since the copies came from different components of the page assembled by their CMS, it is quite likely that the authors of the separate components never encountered this issue while working on their piece.
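A hypothetical reconstruction of that leak, with a tiny fake timer registry standing in for setTimeout/clearTimeout so the leaked timer is observable without actually spinning (every name here is illustrative, not from the real library):

```javascript
// Fake timer registry: setTimeout returns an ID, clearTimeout removes it.
const timers = new Map();
let nextId = 1;
const fakeSetTimeout = (fn, ms) => { const id = nextId++; timers.set(id, fn); return id; };
const fakeClearTimeout = (id) => { timers.delete(id); };

// The library's bug: the polling timer ID lives in one shared "global".
let pollTimerId;

function startPolling() {
  function poll() {
    // ...check whether the awaited piece of the page has loaded...
    pollTimerId = fakeSetTimeout(poll, 0); // re-arm with zero delay
  }
  pollTimerId = fakeSetTimeout(poll, 0);
}

startPolling(); // first copy of the library writes its timer ID...
startPolling(); // ...and the second copy clobbers it

// The load condition is met: only the last-set timer can be cancelled.
fakeClearTimeout(pollTimerId);
console.log(timers.size); // → 1 (the first copy's timer is leaked and would spin forever)
```

With a real zero-delay timer and no clamp, that one leaked callback is enough to pin a core for as long as the page stays open.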

The options for a browser vendor on a page like this are:
1.) Honor the specified timeout and let the page burn 100% CPU. Unacceptable for users.
2.) Convince the page author either to upgrade to a newer version of the library or to modify their CMS to avoid including multiple, possibly-renamed copies of the library on the page. Both options are impractical.
3.) Apply a clamp.

I can guarantee you that if setImmediate() is implemented as specified, we'll run into a popular website with the same behavior sooner rather than later and be faced with the same options. It's very easy to leak a timeout, and there are lots and lots of authors of varying levels of competence.

- James



From: James Robinson [mailto:jamesr@google.com]
Sent: Friday, July 01, 2011 3:54 PM

To: Jason Weber
Cc: public-web-perf@w3.org; Jatinder Mann
Subject: Re: [Efficient Script Yielding] - Clamping

On Fri, Jul 1, 2011 at 3:27 PM, Jason Weber <jweber@microsoft.com> wrote:

Of course, we can't get all authors to write ideal javascript code.  After all, we only had to add a clamp to setTimeout() and setInterval() because people were creating tight loops with timeouts and using 100% of available CPU (see https://bugzilla.mozilla.org/show_bug.cgi?id=123273, for example).  This new proposal provides another way for bad authors to recreate the problems that led to the clamp being necessary for setTimeout()/setInterval(), but it doesn't seem to enable any use case that a good author couldn't already achieve today.


I agree that in an ideal world we wouldn't need to clamp setTimeout(0) to 4ms and could use that to address this pattern rather than needing to add a new API. Between 1990 and 2009, setTimeout was implicitly clamped (by either the browser or the operating system) to at least 10ms on Mac OS X and Windows. When a developer wrote, debugged, and tested code on their machine using setTimeout(0), they would receive at most 100 callbacks a second.

This led to a lot of incorrect assumptions. For example, on Windows setTimeout(0) would result in 64 callbacks a second, which is ~60fps. That meant developers could successfully use setTimeout(0) for script-based animations, which is of course a bug in the script but worked because of the underlying clamps. The previous clamps helped developers write a lot of buggy code that essentially ran at the refresh rate.
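The arithmetic behind those callback rates can be checked directly. The 15.625 ms Windows timer tick below is inferred from the 64 callbacks/sec figure in this thread (it matches the default Windows timer resolution), and the 10 ms Mac clamp comes from the paragraph above:

```javascript
// Back-of-the-envelope numbers for the historical implicit clamps.
const windowsTickMs = 15.625;                          // assumed default Windows timer tick
const windowsCallbacksPerSec = 1000 / windowsTickMs;   // → 64, visually indistinguishable from 60fps
const macClampMs = 10;                                 // ~10 ms clamp on early Mac OS X
const macCallbacksPerSec = 1000 / macClampMs;          // → 100
console.log(windowsCallbacksPerSec, macCallbacksPerSec); // → 64 100
```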

It's challenging to change the implicit rules around a core API like setTimeout after 20 years, with so much buggy code out there. That's essentially what we're doing by reducing the clamps to 4ms. If the legacy throttles hadn't existed, I doubt we would need to clamp setTimeout(0), which would make this discussion moot.

We don't believe that setImmediate should be clamped. User agents need to remain responsive and prioritize work accordingly. We shouldn't allow DOS scenarios, and we should be responsible citizens on the operating system. However, if a developer would like to use an entire core of the machine for computation on the UI thread, and that webpage is in the foreground (implying user engagement), that should be allowed.
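The intended pattern is chunked computation that yields between chunks without paying an artificial delay on every yield. A sketch, using Node.js's setImmediate to stand in for the proposed browser API (processChunks and its parameters are illustrative, not from any spec):

```javascript
// Process a large array in chunks, yielding to the event loop between
// chunks via setImmediate instead of eating setTimeout's 4 ms clamp
// on every yield.
function processChunks(items, chunkSize, handleItem, done) {
  let i = 0;
  (function step() {
    const end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) handleItem(items[i]);
    if (i < items.length) setImmediate(step); // yield, then resume as soon as possible
    else done();
  })();
}

// Example: square five numbers, two per turn of the event loop.
const out = [];
processChunks([1, 2, 3, 4, 5], 2, n => out.push(n * n), () => console.log(out));
// → [ 1, 4, 9, 16, 25 ]
```

Because the user agent regains control between chunks, it can still process input and paint; the clamp question is whether the "resume as soon as possible" step should be artificially delayed.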


What makes you think that the authors who wrote buggy code with setTimeout(0) loops will not write the same buggy code with setImmediate(), thus forcing user agents to re-introduce a clamp?

- James
Received on Wednesday, 6 July 2011 21:02:16 UTC
