Re: Executing script-inserted external scripts in insertion order

>> For those of us who aren't familiar with the products mentioned so far
>> on this thread, could someone explain why it is interesting to be able
>> to delay the execution of a batch of scripts until after all those
>> scripts have been downloaded?

That's not exactly the use case being discussed, but I understand how it 
might seem like the logical conclusion from the discussion. I'm not actually 
trying to delay a *batch* of scripts, but rather to be able to say 
explicitly: "scripts A and B need to download in parallel, but A *must* 
execute before B, because B has an execution dependency on A."

Right now, in all modern browsers, if you specify scripts A and B with 
manual script tags in the markup, this is exactly what happens: A and B 
download in parallel, but they execute in HTML markup order.
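
For concreteness, the markup case is just something like this (the file 
names are placeholders):

    <script src="jquery.js"></script>
    <script src="my-code.js"></script>
    <!-- both files are fetched in parallel, but jquery.js is
         always executed before my-code.js -->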

But if you use on-demand/lazy-loading techniques -- or, more to the point, 
a script loader like LABjs, RequireJS, or various others, which solve script 
loading not with script tags in the HTML but with injected script nodes -- 
you notice right away that there's no way to enforce the execution order, 
even if you need the order.
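
In other words, a hand-rolled injection sketch like the following (with 
placeholder file names, and not any particular loader's code) gives you no 
ordering guarantee between the two scripts:

    function loadScript(src) {
      var script = document.createElement("script");
      script.src = src;
      // per the spec behavior under discussion, script-inserted external
      // scripts are async by default, so whichever file finishes
      // downloading first executes first
      document.getElementsByTagName("head")[0].appendChild(script);
    }

    loadScript("jquery.js");   // A (e.g. hosted on a CDN)
    loadScript("my-code.js");  // B depends on A, but may execute before it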

So think about it from this perspective: you have an existing page that 
works fine with manual HTML script tags, and then (for one of various 
reasons) you want to replace those script tags with a dynamic script loading 
technique. All of a sudden, it's no longer possible to ensure that those two 
scripts execute in the proper order.

It's important to note (and I'll expound on this in a moment) that this use 
case involves at least one of the two scripts being on a remote domain (like 
a CDN), which means that XHR cannot be used to achieve "download in 
parallel, execute in order."
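
(For reference, the XHR trick I'm alluding to is roughly the sketch below, 
shown for one placeholder same-origin file; you repeat it per file, holding 
the downloaded text until it's that file's turn to execute. The same-origin 
policy is what rules it out as soon as one of the files lives on a CDN.)

    // download via XHR (several files can download in parallel), keep the
    // source text, then inject/execute each one in the required order
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "my-code.js", true);  // must be same-origin
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        var script = document.createElement("script");
        script.text = xhr.responseText;  // executes as soon as it's inserted
        document.getElementsByTagName("head")[0].appendChild(script);
      }
    };
    xhr.send(null);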

>> It seems to me like it would be simpler to either have all the scripts
>> download and execute as they become available, with any dependencies
>> resolved by having the scripts register to be notified when their
>> dependencies become available (this tends to be the kind of approach I
>> usually use), or to have all the scripts concatenated into one JS file
>> that is downloaded and executed in one go (the latter approach is what
>> Google tends to use on its sites — it has the side-benefit of reducing
>> the number of HTTP connections).

There's no question that reducing the number of HTTP requests, combining 
files, and designing your code's APIs to be friendly to progressive 
definition are all great techniques. But there are *lots* of use cases of 
existing script content on the web where none of those techniques is 
possible.

For instance, consider this very common use case:

I need to load jQuery, jQuery-UI, and several plugins into my page, in 
addition to a little bit of initialization code for the page. To be smart, I 
combine all those plugins together with my initialization script. But I want 
to load jQuery and jQuery-UI from their CDN locations (to take advantage of 
shared caching, etc.). So I have a total of 3 scripts -- 2 remote, and 1 
local. Probably a pretty decent balance (performance-wise) of how to load 
all this code.


So, what are my options?

1. Use only manual script tags. The browsers will download these all in 
parallel, but keep the execution order.

But then, my on-demand/dynamic loading use case won't work. For instance, 
even if those script tags show up at the bottom of the <body> (the typical 
advice), this is not ideal loading performance, because loading all those 
scripts will still block the page's dom-ready *and* onload events, meaning 
the page appears unusable to the user (even if images/CSS are rendered) 
until the scripts finish loading. That is specifically why dynamic script 
loading (like with LABjs) helps the page: it allows dom-ready and onload to 
fire as soon as the rendered content is loaded, while letting the scripts 
keep loading/executing in the background. YES, this can create a lag (what I 
call "FUBC", flash-of-unbehaviored-content) between the content displaying 
and the content being activated/behaviored by the scripts. But if a site is 
UX savvy, they will figure out how to progressively enhance the page in this 
respect in a way that is pleasing and not jarring to the user.


2. OK, so now, as a performance person, I decide that dynamic script loading 
is the way to go. So what's my next option? Because I've read lots of great 
content about how reducing HTTP requests improves performance, and because 
browsers don't have a straightforward way to maintain the execution order of 
multiple injected scripts, I think: "I'll just download the jQuery and 
jQuery-UI scripts, self-host them, and combine them into my local file." 
This may seem like a decent option, but consider the possible negative 
performance behavior now:
   a) we lose the shared-caching effect that CDNs were designed to exploit. 
For jQuery and jQuery-UI, this could be a potentially huge amount of 
performance lost.
   b) now hundreds of KB of JavaScript download entirely serially, as one 
script, whereas before the browser was able to download jQuery, jQuery-UI, 
and my local script in parallel. I've done a bunch of different performance 
benchmark testing that suggests the parallel-loading benefit of 2 or 3 files 
versus one big file (especially at the sizes we're discussing) is a 
non-trivial improvement. So, if I concat everything into one file, I 
potentially lose that performance benefit.
   c) "long-term" caching behavior: my page code is definitely much more 
volatile (subject to regular change/tweak) than is jQuery/jQuery-UI. That 
means that every time I change my code, all my users have to completely 
re-download the 100's of k of jQuery/jQuery-UI which didn't change, because 
those bytes were not separately cacheable. In other words, volatile code 
should in general not be combined with stable code as separate 
cacheability/long-term caching behavior are concerned.


3. So now I realize I need a smart script loader to handle these details for 
me. So, I can use LABjs or RequireJS or any of several other loaders. How 
can those script loaders do this? They have two options:
   a) load each script and execute, serially. This has terrible performance 
implications, regressing to the way things were before the modern browsers 
started downloading script tags in parallel.
   b) figure out how to load in parallel, but execute serially (when that 
need is expressed). This is what LABjs figured out how to do, and what the 
RequireJS "order" plugin does (a usage sketch follows below). It's the ideal 
balance of script-loading performance with flexibility of where files are 
hosted, caching behavior, etc.
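
To make 3b concrete, the usage looks roughly like this (LABjs syntax from 
memory, with placeholder file names, and assuming LABjs itself is already 
loaded on the page):

    $LAB
      .script("jquery.js").wait()        // CDN-hosted
      .script("jquery-ui.js").wait()     // CDN-hosted
      .script("my-plugins-and-init.js")  // local, combined file
      .wait(function () {
        // all three files were fetched in parallel, but the .wait() calls
        // guarantee they execute in this order before this callback runs
      });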


So, from a performance standpoint, as far as existing (and very widespread) 
web content like jQuery/jQuery-UI is concerned, 3b is probably the ideal 
approach. And LABjs/RequireJS were able to achieve it, until the Firefox 
change (a result of spec conformance) happened a week ago. Hence, this whole 
discussion has cropped up to try to figure out how to resolve the 
conflicting behavioral change.


--Kyle

>
> My apologies for this rather basic question; I'm rather naïve in these 
> matters.
>
> -- 
> Ian Hickson
> 

Received on Tuesday, 12 October 2010 13:19:31 UTC