
Re: Need to: "preload" CSS and JS

From: Kyle Simpson <getify@gmail.com>
Date: Sun, 19 Dec 2010 23:08:18 -0600
Message-ID: <47405CD450E84B46BEB9DA09CC79FA26@Frodo>
To: "public html" <public-html@w3.org>
> So why don't you just write all the extra code as a bunch of functions
> that don't do anything until called?  Then put the script at the end
> of the body.  Its download won't block anything earlier in the page
> from working, and it will do nothing when executed, so it just has to
> be parsed.  What problems does this have?

The question is not whether such a suggested pattern is right or wrong for 
any given JavaScript tool/framework/widget/plugin. The question is: for the 
existing content (of which there is quite a bit) that affects the page 
automatically upon resource load, is there any way -- without altering 
existing code, which is often not an option -- for that script resource to 
be loaded and made ready for use, but not actually run until a later time, 
when the author (or user) determines it's appropriate?

It would be pointless to debate here whether such patterns are useful or 
not. They exist, in great supply, and resource loaders are attempting to 
improve the performance of them without necessarily fighting the battle over 
API/usage patterns or telling existing content that it must change.

> Alternatively, are <script async> and/or <script defer> useful here,
> in browsers that support them?  (Remember that if some browsers don't
> yet support existing specced features, that doesn't affect our
> discussion here.  If the problem is "browsers don't support existing
> specs", the solution is not "let's make more specs".)

Neither `defer` nor `async` is currently spec'd/defined for dynamically 
inserted script elements. Of course, my other *really long* thread here on 
this list, asking for `async=false` behavior on dynamic script elements, is 
an attempt to partially address that, but it's not been formally 
proposed/accepted yet (still in progress).

As currently spec'd, this is extremely limiting, because you can only access 
such behavior via markup. There are many, many use-cases where content must 
be dynamically injected into a page (third-party widgets, ads, etc.), so the 
markup approach is basically unhelpful.
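For concreteness, here's a minimal sketch of the kind of dynamic injection 
being discussed, using the *proposed* (not yet spec'd or accepted) 
`async=false` behavior. `loadInOrder` is a made-up helper name, and `doc` is 
a parameter only so the helper can be exercised outside a browser:

```javascript
// Hedged sketch: inject interdependent scripts dynamically while asking for
// ordered execution via the proposed `async = false` behavior. Nothing here
// is spec'd yet; `loadInOrder` is a hypothetical helper, not a real API.
function loadInOrder(doc, urls) {
  return urls.map(function (src) {
    var s = doc.createElement("script");
    s.async = false; // proposed: preserve insertion order for injected scripts
    s.src = src;
    doc.head.appendChild(s);
    return s;
  });
}

// e.g. loadInOrder(document, ["/framework.js", "/plugin.js"]);
```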

Moreover, `defer` is the only one that really comes close to what's being 
asked for in this use-case. But it falls well short of what Steve's ideas 
call for. Steve wants the author to be able to decide on-demand (by a timing 
decision, a user event like activating a part of the page, etc.) when a 
preloaded resource should be parsed/executed. `defer` forces the 
parse/execute to happen at a fixed point -- once the document has finished 
parsing, just before `DOMContentLoaded` -- so it gives far less control than 
what Steve is proposing.

> So even parsing the script is too expensive for you?

YES (especially so on mobile devices)! The Gmail mobile team even went so 
far as to take large chunks of their code, inline them in the HTML document 
in inline script blocks, but wrap the code in a /* ... */ comment block to 
prevent the browser from parsing it. Then, when they wanted the code to 
execute later, they grabbed the source (via the `text` property), removed 
the comment delimiters, and eval'd or re-injected the code to execute it. 
They saw a huge improvement in page-load speed with this technique.
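As best I understand the technique (the helper name here is mine, not 
Gmail's, and this sketch assumes the payload itself contains no `*/`), it 
looks roughly like this:

```javascript
// Rough sketch of the comment-wrapping trick described above. The code ships
// inside an inline <script> wrapped in /* ... */, so the browser downloads it
// without parsing it; later, the page strips the delimiters and eval()s.
// `extractDeferredCode` is a hypothetical name, not Gmail's actual API.
function extractDeferredCode(scriptText) {
  // remove the leading /* and trailing */ wrapper
  return scriptText.replace(/^\s*\/\*/, "").replace(/\*\/\s*$/, "");
}

// In the page, something like:
//   <script id="deferred">
//   /* ...large chunk of application code... */
//   </script>
// and later, on demand:
//   eval(extractDeferredCode(document.getElementById("deferred").text));
```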

It is the logical continuation of that technique that Steve is trying to 
incorporate into his ControlJS loader. He's suggesting that parsing a large 
script not only causes a noticeable delay in user interaction or page 
rendering (by occupying the browser/CPU), but also delays other, perhaps 
more critical, UX JavaScript logic (animations, handlers for user events, 
etc.). That's why it's so important to have more control over when that 
costly delay happens, to minimize its potential side-effects.

> How about you
> fetch it by AJAX and then eval() it when you need it?  I recall
> reading that some Google sites were doing this.

Ajax/eval works (albeit with some drawbacks)... except when you need to load 
such resources from a remote domain. And of course, the proliferation of CDN 
usage is making same-domain Ajax a lot less viable for such cases.

Yes, we have CORS for cross-domain Ajax. But a *lot* of servers (and CDNs) 
do not yet support such access. The server has to implement a non-trivial 
amount of extra logic to handle the proper preflight headers, etc. Moreover, 
CORS is not fully cross-browser yet (not yet in Opera). While this isn't a 
*spec issue per se*, it directly affects what resource loaders can and 
cannot rely on. Plus, we still have the IE6-7 case to consider.

Until such time as CORS (and server support for it) is fully ubiquitous, 
it's not a reliable option for "load now, execute later." XHR can be *part* 
of a loader's overall strategy (as it is in my LABjs loader), but it's not a 
comprehensive solution.
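To make the distinction concrete, here's a rough sketch of that same-domain 
XHR strategy. `preloadScript`/`executeScript` are illustrative names (not 
LABjs's actual API), and the optional `xhrFactory` parameter exists only so 
the helper can be exercised outside a browser:

```javascript
// "Load now, execute later" via XHR: the response text is held in memory;
// the parse/execute cost is paid only when executeScript() is called.
// Names and structure here are illustrative, not any real loader's API.
var scriptCache = {};

function preloadScript(url, xhrFactory) {
  var xhr = (xhrFactory || function () { return new XMLHttpRequest(); })();
  xhr.open("GET", url, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      scriptCache[url] = xhr.responseText; // downloaded, but not yet parsed
    }
  };
  xhr.send(null);
}

function executeScript(url) {
  // parse/execute happens only now, when the author decides
  return eval(scriptCache[url]);
}
```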

>> Because its definition is more of a hint than a proactive request, it's
>> unclear/unreliable whether a script loader could dynamically inject such
>> a <link> element with `rel=prefetch` and force the browser to download
>> that resource right then (or soon thereafter).
>
> Perhaps the answer is to just get browsers to implement rel=prefetch
> properly, then.

I'm not sure exactly what you mean by "properly". Unfortunately, there's 
still a fair amount of ambiguity in how it's defined, and judging by the 
other thread, this may be intentional so as to give browsers more freedom to 
decide.

My concern with <link rel=prefetch> is not that some browser has implemented 
it wrong. My concern is that the spec around it, taken as I interpret "hint" 
and "idle time", makes it wholly unsuitable for a resource loader to use 
reliably and proactively.
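To illustrate the problem: a loader can certainly inject the element, but as 
currently defined it gets no guarantee -- and no signal -- that anything 
happens as a result. A sketch (the helper name and URL are made up, and 
`doc` is parameterized only for testability):

```javascript
// What a loader would *like* to do with rel=prefetch. Because the spec
// treats prefetch as a hint to be honored in "idle time", the browser may
// defer the fetch indefinitely or skip it, and fires no event either way.
function hintPrefetch(doc, url) {
  var link = doc.createElement("link");
  link.rel = "prefetch";
  link.href = url;
  doc.head.appendChild(link);
  return link; // no reliable way to know whether/when the fetch happened
}
```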

> It sounds like what you'd like is a mechanism to inform the browser
> that you want a particular resource to be loaded and cached for the
> lifetime of the current page, even if its caching headers don't allow
> this.  (This isn't a violation of the spirit of the caching headers,
> if you consider that browsers normally use resources for as long as
> the current page is open even if they're uncacheable.)  That sounds
> like a workable idea -- perhaps we could adapt <link rel=prefetch>, or
> perhaps make up a new JavaScript function.

This is an interesting idea that had not occurred to me. It's not really 
what I was asking for at all, and I think it's suboptimal for the use-case 
in question. But I suppose it's a valid approach nonetheless.

I say "suboptimal" because I consider such "cache-preload" tricks inherently 
less efficient. Even if we did as you suggest, and eliminated the problem of 
the 51% of resources without sufficient caching headers by substituting 
page-lifetime caching under certain (or all) loading cases, there is still a 
non-trivial delay in pulling an item from cache (i.e., disk), as opposed to 
immediately parsing/executing a resource that's been proactively preloaded 
in memory with its execution deferred.
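For reference, the trick I mean looks roughly like this (illustrative only; 
real loaders branch per browser -- e.g. `new Image()` in some, `<object>` in 
others -- and these helper names are mine):

```javascript
// Sketch of the "cache-preload" trick: the first request warms the HTTP
// cache without executing; a later <script> re-request should hit the cache,
// but still pays the disk-read penalty described above.
function cachePreload(doc, url) {
  // non-executing fetch of the script resource
  var obj = doc.createElement("object");
  obj.data = url;
  obj.width = 0;
  obj.height = 0;
  doc.body.appendChild(obj);
}

function executeFromCache(doc, url) {
  var s = doc.createElement("script");
  s.src = url; // re-request; served from cache only if the headers permit
  doc.head.appendChild(s);
}
```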

Some time ago, I informally benchmarked the cache-load penalty by comparing 
a script loaded via XHR and injected against a script loaded into cache and 
then re-requested. The XHR approach outperformed the load-from-cache. I 
can't recall the exact figures, but I think for a medium-sized script (~75k) 
the cache path was about 5% slower, or something of that order.


--Kyle 
Received on Monday, 20 December 2010 05:08:55 UTC
