
Need to: "preload" CSS and JS

From: Getify <getify@gmail.com>
Date: Sun, 19 Dec 2010 08:22:25 -0600
Message-ID: <A5B6EF7E71EB459DAE23B67479937773@spartacus>
To: "public html" <public-html@w3.org>
[This thread is being split off from this previous thread:


...to separately address this question:]

>> I ask because many different "loaders" (both CSS and JavaScript) attempt to
>> use <object> and `new Image()` to "preload" (that is, load into cache, but
>> not parse/execute) such content.
>> Is this a "safe" hack, or is it likely that such "preloading" will
>> eventually (or is intended to be) prevented?
> It sounds like a dedicated feature would be the desired way to solve
> this.  I'm not sure offhand if one exists.  What's the reason for
> using tricks to preload CSS or JS instead of just going ahead and
> loading it normally?  Is it because of some inherent limitation, or
> just because of poor existing implementations?  Recent browser
> versions have gotten much smarter about fetching things in parallel
> and so on -- is this stuff still needed for the latest browsers?

There are a variety of use-cases and examples I can cite where people have found "preloading" to be a useful feature. I'll refrain from going into too much depth in this initial response (it could get really long!), but as it becomes necessary in the thread to explore such things in more detail, know that there is indeed plenty of performance-optimization thinking behind these feature requirements.

The primary idea behind such "preloading" is to load one or more resources in parallel, but *not* have the browser spend precious CPU time parsing or executing them the moment they finish loading. Mobile browsers/devices are the most obvious case where such a technique helps performance. A site heavy on CSS and/or JS may want to "pipeline" all its resources into the page (in expectation of their being used "later"), but not bog down the device/browser by parsing some or all of those resources all at once.

For instance, a site may have some "bootstrapper" JS code that provides the critical baseline behavior on the mobile page, plus one or more additional layers of complex functionality that it wants to layer in a little bit at a time. If it waited to *load* those resources until they were needed, the user would see a significant UX "lag" when they click something and the page must wait for the script to load.

That "lag" could be reduced if, during some of the time the user was reading content and not yet interacting with it, the page could have been "preloading" (but not executing) the JS/CSS for that feature. As I said, the JS/CSS may have some noticeable impact on the UX of the initial page load if it were allowed to be parsed/executed during the critical first few seconds of page-load.

This is most obvious in the mobile use-case. But some optimization experts are starting to assert that similar ideas are important even for more full-powered desktop browser experiences. For instance, see Steve Souders' most recent series of blog posts, introducing his "ControlJS" loader:


One of the primary features of this loader is that it can load one or more JS resources in parallel, but not force them all to be executed right away. The page author can choose not to parse/execute some or all of that JS (on a per-resource basis) until a later time in the page. He doesn't want to wait to *load* those resources (so as to minimize the UX delay when they are needed) -- he just wants to separate the *load* from the *execute* and be in control of both phases.

Steve is suggesting that for most sites, and for most browsers (mobile and desktop), JS parsing/execution creates a noticeable slowdown in the page's rendering of content during initial page-load. And so he wants to push off execution of that code until a later time, even to the point where code might not execute until a user clicks some menu item, etc.

<link rel=prefetch> has been suggested as a possible solution to this use-case. However, its definition is not quite what we need. The spec describes it as a "hint" to the browser about things that will be needed "later", which the browser should fetch silently in the background at some point during idle time.

While this type of definition may very well be valid for "preloading" in the original sense of the word (such as loading something on page A that will be used on page B), performance optimization folks are seeing the need for a mechanism that is more proactive so that you can force the preload of a resource for on-demand use "later" in the lifetime of page A.

Because its definition is more of a hint than a proactive request, it's unclear/unreliable whether a script loader could dynamically inject such a <link> element with `rel=prefetch` and force the browser to download that resource right then (or soon thereafter).
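To make the attempted workaround concrete, here is a minimal sketch (the helper name is illustrative, not from any real loader) of a loader dynamically injecting such a hint. Per the spec's wording, the browser MAY fetch the URL at some idle point; nothing forces an immediate download, which is exactly the unreliability described above.

```javascript
// Hypothetical sketch: dynamically injecting a <link rel=prefetch> hint.
// Whether this starts the download right away is browser-dependent.
function injectPrefetch(doc, url) {
  var link = doc.createElement("link");
  link.rel = "prefetch";
  link.href = url;
  doc.getElementsByTagName("head")[0].appendChild(link);
  return link;
}
```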

Also, several browsers do not implement the `load` event for <link> elements, so a script or CSS loader cannot determine when the resource has finished loading. It's critical to know when such a resource has finished its "preload", because if a user tries to interact with an element whose behavior code is still being preloaded, the page needs to display some sort of "wait indicator" to the user, and then hide that indicator once the load finishes and the feature is ready to execute.
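To illustrate what a loader needs from a reliable `load` event, here is a hedged sketch of the wait-indicator flow just described. All helper names (`trackPreload`, the `ui` hooks) are illustrative assumptions, not any real library's API:

```javascript
// Hypothetical sketch: track whether a preload has finished, so a
// click handler can either run the feature immediately or show a
// wait indicator until the pending load completes.
function trackPreload(link, ui) {
  var state = { done: false, pending: false };
  link.onload = function () {           // the event many browsers lack
    state.done = true;
    if (state.pending) {                // user clicked before load finished
      ui.hideWaitIndicator();
      ui.runFeature();
    }
  };
  state.onUserClick = function () {
    if (state.done) {
      ui.runFeature();
    } else {
      state.pending = true;
      ui.showWaitIndicator();           // feature is still loading
    }
  };
  return state;
}
```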

IE has, for a long time, supported a "preloading" behavior on the <script> element itself. IE will begin downloading a resource once a dynamic script element's `src` attribute is set, but it will not execute that script (even once it finishes downloading) until after the script element has been directly appended to the DOM. In that way, a script can be "preloaded" by creating a script element but not appending it to the DOM, and then it can later be executed on-demand by appending it to the DOM.
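The IE pattern just described can be sketched roughly as follows (helper names are illustrative):

```javascript
// Sketch of the IE-only preload pattern: setting `src` on a detached
// script element starts the download; appending it to the DOM later
// triggers execution.
function preloadScript(doc, url) {
  var script = doc.createElement("script");
  script.src = url;   // IE begins fetching immediately
  return script;      // detached from the DOM, so nothing executes yet
}
function executeScript(doc, script) {
  // Appending triggers execution (once the download has finished)
  doc.getElementsByTagName("head")[0].appendChild(script);
}
```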

The `readyState` technique would probably be useful for this overall use-case (even for CSS, if extended to the <link> tag as well) if it were standardized and available in all browsers. However, `readyState` only works in IE and is thus not currently useful to script/CSS loaders.
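A minimal sketch of the IE-only `readyState` check (the helper name is an illustrative assumption): it detects that the detached script's download has finished, without executing anything.

```javascript
// Sketch: fire a callback once a detached script's download completes
// (IE-only `readyState`/`onreadystatechange`), leaving execution to a
// later on-demand DOM append.
function onPreloaded(script, callback) {
  script.onreadystatechange = function () {
    if (script.readyState === "loaded" || script.readyState === "complete") {
      script.onreadystatechange = null;  // don't fire twice
      callback(script);                  // safe to append/execute on demand
    }
  };
}
```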

Thus, we get to the point where many loaders resort to hacks like <object> and `new Image()` to do the "preloading" of such resources, since neither <link rel=prefetch> nor `readyState` works reliably across browsers.

The unfortunate (and in my mind, fatal) flaw of the <object>/Image techniques is that they rely on the resource being properly cached as a result of the "preload", so that a *subsequent request* to add the resource the proper way (via an injected <script> or <link rel=stylesheet> element) can pull it from the cache nearly immediately and execute it. For the sake of this discussion, I'll label these techniques (hacks) as "cache-preload".
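For reference, the "cache-preload" hacks look roughly like this (an illustrative sketch, not any particular loader's code). The resource is requested via the "wrong" element type, so the browser caches it without parsing/executing it, and a later real <script>/<link> injection is expected to hit the cache:

```javascript
// Sketch of the "cache-preload" hacks: fetch JS/CSS through an element
// that will never parse/execute it, relying on the HTTP cache for the
// later "real" request.
function cachePreloadViaImage(url) {
  var img = new Image();   // the browser won't run JS/CSS as an image,
  img.src = url;           // but it will fetch and (hopefully) cache it
  return img;
}
function cachePreloadViaObject(doc, url) {
  var obj = doc.createElement("object");
  obj.data = url;
  obj.width = 0;           // keep the dummy object invisible
  obj.height = 0;
  doc.body.appendChild(obj);
  return obj;
}
```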

Page authors must sometimes use resources from locations they do not control (or do not even know how to control), and a lot of CSS/JS is still served without proper caching/expiration headers. Steve said his survey of sites on the web reveals that 38% of resources have no expiration, and 51% have an expiration of 20 seconds or less.

That means if such a resource is loaded via "cache-preload", the result will likely be a costly double-load, since the resource wasn't properly cached the first time. As such, a significant chunk of the web's resources cannot properly be "cache-preloaded".

Hopefully, this helps shed some light on the reasoning behind providing a proactive mechanism for "preloading", with on-demand parsing/execution as a separately available step. To the extent that we don't have a proper facility for this in the current spec and browser implementations, the other previous thread about resource loading/caching is still important to consider and discuss.

Received on Sunday, 19 December 2010 14:23:02 UTC
