- From: Boris Zbarsky <bzbarsky@MIT.EDU>
- Date: Mon, 15 Jul 2013 21:33:08 -0400
- To: Bruno Racineux <bruno@hexanet.net>
- Cc: WHATWG List <whatwg@whatwg.org>, Yoav Weiss <yoav@yoav.ws>
On 7/15/13 7:28 PM, Bruno Racineux wrote:
> The outline there suggests: "- When compiling a lazy script with no inner
> functions, do a table lookup for a script with the same source location
> (filename, lineno, column, source begin/end)"

So just to be clear: that bug is talking about "script" in the sense that
SpiderMonkey thinks of it, which again is either a toplevel script or a
function body.

The above lookup is basically an optimization: once the lookup is done, the
result of the lookup is a pair: script chars and compiled code. The script
chars are then compared to the chars of the script that needs to be
compiled, and if those match, the compiled code is used.

The reason for factoring in lineno and column is to minimize the number of
char comparisons that need to be done (recall that "script" here is a
function body in many cases, and there are lots of those on a single line
of a single file in a typical minified script).

> I am not going that far with the overhead. My suggestion is to be only
> interested in the unique URL hash (not lineno, column, or source
> begin/end): just how many times that script has been accessed in the last
> day, week, or month.

Sure, but then what gets done with that information?

Current UAs for the most part do the following, for a whole script:

1) Do a fast tokenization pass to catch syntax errors and determine
   function boundaries.
2) Compile and run the toplevel parts of the script.

Functions are then compiled the first time they're actually called. It
turns out your typical web page includes lots of functions (mostly in
libraries) that it never calls.

I assume the proposal is to cache, for a given URL, not just the string for
the script but also the results of the initial syntax-check pass and
whatever things we compiled, right?
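(For readers following along: the lookup Boris describes can be sketched roughly as below. This is an illustrative simulation in JavaScript, not SpiderMonkey code; the cache shape, key format, and function names are all made up for the example. `new Function` stands in for the real compiler.)

```javascript
// Hypothetical sketch: key the cache on source location, but still compare
// the cached chars before reusing the compiled code.
const compileCache = new Map();

function cacheKey(filename, lineno, column, begin, end) {
  return `${filename}:${lineno}:${column}:${begin}:${end}`;
}

// Stand-in for the real compiler.
function compile(chars) {
  return new Function(chars);
}

function getCompiled(filename, lineno, column, begin, end, chars) {
  const key = cacheKey(filename, lineno, column, begin, end);
  const hit = compileCache.get(key);
  // The location match only narrows the search; the chars must still match
  // exactly before the cached compiled code can be reused.
  if (hit && hit.chars === chars) {
    return hit.code;
  }
  const code = compile(chars);
  compileCache.set(key, { chars, code });
  return code;
}
```

The lineno/column components of the key are what keep the number of (potentially long) char comparisons small when many function bodies share one line of a minified file.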
That might be worth it, as I said, but it needs careful measurement: to the
extent that the storage is slow, the CPU is fast, and the compiler is fast
and simple (which the first-pass compiler typically tries to be),
recompiling may be faster than deserializing a compiled representation.

> Or the "fundamental unit of compilation" for JS

It's only the fundamental unit because of the above observation that many
functions never get called.

-Boris
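(Again for readers following along: the lazy function compilation that makes the function the "fundamental unit" can be sketched as below. This is an illustrative model only, not engine code; `lazyFunction`, `compileCount`, and the use of `new Function` as the compiler are all invented for the example.)

```javascript
// A body is kept as a string and compiled only on first call, so functions
// that are never called are never compiled at all.
let compileCount = 0;

function lazyFunction(bodyChars) {
  let compiled = null; // stays null forever if the function is never called
  return function () {
    if (compiled === null) {
      compileCount += 1;                 // the full compile happens here, once
      compiled = new Function(bodyChars);
    }
    return compiled.call(this);
  };
}

const used = lazyFunction("return 42;");
const neverCalled = lazyFunction("return 'expensive library helper';");

used(); // triggers the one and only compile of `used`
used(); // reuses the compiled code; compileCount stays at 1
```

Since `neverCalled` is never invoked, its body is only ever tokenized (step 1 above), never compiled, which is exactly the win Boris describes for library-heavy pages.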
Received on Tuesday, 16 July 2013 01:33:40 UTC