RE: [Efficient Script Yielding] - Clamping

I would argue that the two decades of implicit clamping are the problem, not buggy JavaScript.

Let's take a specific scenario: using setTimeout(0) for frame-based animation in 2004.

When the web developer wrote this code, they were likely using Firefox or IE6 on Windows XP, or an early version of Safari on Mac OS X. In all of these environments, setTimeout(0) was implicitly clamped by the operating system. When they tested their animation, it produced the visual results they expected. The human eye can't distinguish between 60 and 64 callbacks a second, which is what they would have received on Windows. When they tested on the Mac, they may have received ~100 callbacks a second. The difference was not significant, and if anything they likely would have assumed the Mac was simply faster than Windows, without realizing there was a bug in their code. If setTimeout hadn't been implicitly clamped, their animation wouldn't have worked, and they would have found and fixed the bug at that point in time.
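
To make the failure mode concrete, the pattern would have looked something like this (a minimal sketch; drawNextFrame is illustrative):

    // 2004-era animation loop: the author assumes each callback is roughly
    // one frame. That assumption only held because the platform implicitly
    // clamped setTimeout(0) to ~15.6ms on Windows or ~10ms on Mac OS X.
    function animate() {
        drawNextFrame();        // illustrative: advance the animation one fixed step
        setTimeout(animate, 0); // intended as "schedule the next frame"
    }
    setTimeout(animate, 0);

Without the implicit clamp, the same loop fires as fast as the event loop can turn, the animation runs far too quickly, and the bug becomes obvious immediately.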

There will always be buggy code in the world, and people will always abuse APIs, including setImmediate. I have faith that web developers will be able to use the new API appropriately.

From: James Robinson [mailto:jamesr@google.com]
Sent: Friday, July 01, 2011 3:54 PM
To: Jason Weber
Cc: public-web-perf@w3.org; Jatinder Mann
Subject: Re: [Efficient Script Yielding] - Clamping

On Fri, Jul 1, 2011 at 3:27 PM, Jason Weber <jweber@microsoft.com> wrote:

Of course, we can't get all authors to write ideal JavaScript code.  After all, we only had to add a clamp to setTimeout() and setInterval() because people were creating tight loops with timeouts and using 100% of available CPU (see https://bugzilla.mozilla.org/show_bug.cgi?id=123273, for example).  This new proposal provides another way for bad authors to recreate the problems that led to the clamp being necessary for setTimeout()/setInterval(), but it doesn't seem to enable any new use cases beyond what a good author can already achieve today.
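
The abusive pattern that motivated the clamp is essentially a zero-delay polling loop, something like this (a sketch; checkForUpdates is illustrative):

    // Without a minimum delay, this re-fires continuously and can
    // consume 100% of an available core just polling.
    setInterval(function () {
        checkForUpdates(); // illustrative busy work
    }, 0);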


I agree that in an ideal world we wouldn't need to clamp setTimeout(0) to 4ms and could use that to address this pattern, rather than needing to add a new API. Between 1990 and 2009, setTimeout was implicitly clamped (by either the browser or the operating system) to at least 10ms on Mac OS X and Windows. When a developer wrote, debugged, and tested code on their machine using setTimeout(0), they would receive at most 100 callbacks a second.

This led to a lot of incorrect assumptions. For example, on Windows setTimeout(0) would result in 64 callbacks a second (the default ~15.6ms timer granularity), which is ~60fps. That meant developers could successfully use setTimeout(0) for script-based animations, which is of course a bug in the script but worked because of the underlying clamps. The previous clamps helped developers write a lot of buggy code that essentially ran at the refresh rate.

It's challenging to change the implicit rules around a core API like setTimeout after 20 years, with so much buggy code out there. That's essentially what we're doing by reducing the clamps to 4ms. If the legacy throttles hadn't existed, I doubt we would need to clamp setTimeout(0) at all, which would make this discussion moot.

We don't believe that setImmediate should be clamped. User agents need to remain responsive and prioritize work accordingly. We shouldn't allow DoS scenarios, and we should be responsible citizens on the operating system. However, if a developer would like to use an entire core of the machine for computation on the UI thread, and that webpage is in the foreground (implying user engagement), that should be allowed.
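
The intended use is cooperative chunking: split a long computation into bounded slices and yield between them so the user agent can service input and paint. A sketch of the pattern (processChunk and job are illustrative):

    // Run a large job in bounded slices, yielding to the user agent
    // between slices. setImmediate resumes as soon as the UA is idle,
    // with no artificial 4ms floor like clamped setTimeout(0).
    function processJob(job) {
        processChunk(job); // illustrative: do one bounded slice of work
        if (!job.done) {
            setImmediate(function () {
                processJob(job);
            });
        }
    }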


What makes you think that the authors who wrote buggy code with setTimeout(0) loops will not write the same buggy code with setImmediate(), thus forcing user agents to re-introduce a clamp?

- James

Received on Friday, 1 July 2011 23:41:31 UTC