
[whatwg] Script loading and execution order for importScripts

From: Jonas Sicking <jonas@sicking.cc>
Date: Sat, 7 Mar 2009 20:58:31 -0800
Message-ID: <63df84f0903072058kca9ea4dw8159f985607978dd@mail.gmail.com>
On Sat, Mar 7, 2009 at 1:40 AM, Oliver Hunt <oliver at apple.com> wrote:
>
> On Mar 7, 2009, at 1:20 AM, ben turner wrote:
>
>> On Fri, Mar 6, 2009 at 8:40 PM, Oliver Hunt <oliver at apple.com> wrote:
>>>
>>> In all honesty i'm not sure which is the better approach as the spec
>>> approach requires developers to manually handle the potential for partial
>>> library execution, but the Mozilla approach removes the ability to load
>>> and
>>> execute scripts in parallel, which may cause latency problems.
>>
>> You are half-correct :)
>>
>> Currently we load all scripts in parallel and then _compile_ each
>> script as soon as it has finished loading (which can be in any order).
>> We do not _execute_ them, however, until all loading and compilation
>> have completed successfully, and then we execute them in the order of
>> the arguments passed to importScripts.
>>
>> You're right that this is different from the behavior described in the
>> spec... I was supposed to mail this list a while ago and completely
>> forgot, many apologies. We felt that our approach was a good
>> compromise between executing only some of the scripts and executing
>> each script as soon as possible. We are certainly open to any better
>> alternatives. Do other JS engines have support for separating the
>> compilation and the execution of scripts?
>
> If by "compilation" you mean you're (effectively) just doing a syntax check,
> then webkit is able to do this, although it has a reasonable cost associated
> with it. But then I have a vague hope that the work being offloaded onto
> workers is more substantial than the work the engine is putting into
> parsing.
>
> So in effect, there are 3 steps that mozilla currently takes:
> 1. for (url in arguments)
>        absoluteUrl = resolve(url);
>        if (!isValid(absoluteUrl)) throw SYNTAX_ERR
>        absoluteUrls.push(absoluteUrl);
> 2. for (url in absoluteUrls)
>        script = loadScript(url)
>        if (!isValidSyntax(script))
>            throw SYNTAX_ERR;
>        scripts.push(script)
> 3. for (script in scripts)
>        execute(script)
>
> While the loads in (2) can be done in parallel, I do not believe that the
> syntax checking can occur in parallel, as the syntax error that can be thrown
> represents an observable side effect, and thus load completion order could
> result in different exceptions being thrown (syntax error on file2.js vs.
> file1.js).
>
> In general I think I prefer this model. Despite the fact that we end up not
> being able to execute JS while waiting for later scripts to load, it seems
> much more reasonable not to have side effects if any of the resources
> has a basic, non-execution-dependent fault.
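The three-step model quoted above can be sketched in plain JavaScript. This is a hypothetical, self-contained illustration, not any engine's actual code: "loading" reads from a map of absolute URL to source instead of the network, and `new Function` stands in for a compile-without-execute step.

```javascript
// Sketch of the resolve / load+syntax-check / execute pipeline described
// above. Hypothetical helper: "loading" reads from a url -> source map.
function importScriptsModel(sources, urls, base = "https://example.com/worker/") {
  // Step 1: resolve every argument up front; an unresolvable URL throws
  // before anything is fetched.
  const absoluteUrls = urls.map(url => new URL(url, base).href);

  // Step 2: load each script (in a real engine, in parallel) and
  // syntax-check it. Checking in argument order keeps the thrown
  // SyntaxError deterministic regardless of load-completion order.
  const scripts = absoluteUrls.map(abs => {
    const source = sources[abs];
    if (source === undefined) throw new Error("NETWORK_ERR: " + abs);
    new Function(source); // parses (syntax-checks) without executing
    return source;
  });

  // Step 3: only once every script has compiled, execute in argument
  // order -- so a syntax error anywhere means nothing ran at all.
  const log = [];
  for (const source of scripts) new Function("log", source)(log);
  return log;
}
```

With two well-formed scripts the log records execution in argument order; if any script fails to parse, the `SyntaxError` surfaces before the first script has executed, matching the no-partial-execution property described above.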

Why do you think it's important not to have side effects for syntax
errors, but not important to avoid side effects for run-time errors?
Given that we simply can't prevent the latter, I don't see any
advantage to users in attempting to prevent the former.

I really don't think optimizing for the case when something has gone
wrong is the way to go. That is an extremely rare case in a deployed
application, and so optimizing for performance feels much more
important to users.

Also, considering how applications are likely to handle these errors,
i.e. fully abort and tell the user that an unrecoverable error has
occurred, it doesn't really matter whether there have been side effects
or not.
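A minimal illustration of this point (hypothetical, not tied to any engine): once a script starts running, a run-time error partway through leaves its earlier side effects observable, no matter how carefully syntax errors were checked up front.

```javascript
// Hypothetical illustration: a run-time failure mid-script leaves
// partial side effects that no amount of up-front syntax checking
// can prevent.
const state = [];
const body = "state.push('configured'); undefinedHelper();"; // 2nd call throws
new Function(body); // compiles fine -- the syntax check passes
try {
  new Function("state", body)(state); // ReferenceError after the push
} catch (e) {
  // the application would typically abort here and report the error
}
// state still records the partial execution: ["configured"]
```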

/ Jonas
Received on Saturday, 7 March 2009 20:58:31 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Wednesday, 30 January 2013 18:47:49 GMT