- From: <bugzilla@jessica.w3.org>
- Date: Fri, 23 May 2014 20:27:51 +0000
- To: public-webapps-bugzilla@w3.org
https://www.w3.org/Bugs/Public/show_bug.cgi?id=25818

Ian 'Hixie' Hickson <ian@hixie.ch> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |ian@hixie.ch

--- Comment #4 from Ian 'Hixie' Hickson <ian@hixie.ch> ---
I think it'd be a lot less confusing to dedupe based only on the URL as it is
found in the markup. That way we will have deduped before we start fetching
anything, and so we can even represent the final resource dependency tree in
the DOM somehow. If we wait until we've actually fetched stuff over the
network to work out what the dependencies are, we're going to have all kinds
of weird effects.

This is particularly relevant, for example, when it comes to dependencies
outside of imports. For example, if we add dependencies to <script> with
deduping, consider something like the following:

   <script ... src="a.js"></script>
   <script ... src="b.js" needs="a.js"></script>
   <script ... src="c.js" needs="b.js"></script>
   <script ... src="d.js"></script>
   <script ... src="e.js" needs="d.js"></script>

(Assume the stuff in "..." is something that makes this dedupe and delay
loads until the resources are needed.)

It looks simple: when you need c.js, you can immediately set off fetches for
a.js and b.js. But what if b.js actually resolves to e.js? Now what do we do?
Do we now fetch d.js and throw away a.js? Do we still run a.js? What if later
we try to run the e.js script? Is it deduped to the b.js script? Does d.js
ever run?

--
You are receiving this mail because:
You are the QA Contact for the bug.
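[Editor's sketch, not from the bug: a minimal TypeScript illustration of the
"dedupe on the markup URL" idea the comment argues for. The names
requestScript and PendingLoad are made up for illustration; no real loader
API is implied. The point is only that the dedupe key is fixed at parse time,
before any network activity, so a later redirect (b.js resolving to e.js)
cannot change which entries were merged.]

   // Hypothetical loader state: one entry per URL as written in markup.
   type PendingLoad = {
     url: string;                 // URL resolved from the markup attribute
     promise: Promise<Response>;  // fetch started at most once per key
   };

   const loads = new Map<string, PendingLoad>();

   // Key the dedupe map on the markup URL resolved against the document
   // base, so two elements pointing at "a.js" collapse to one entry before
   // anything is fetched.
   function requestScript(srcAttr: string, baseURL: string): PendingLoad {
     const key = new URL(srcAttr, baseURL).href;
     let load = loads.get(key);
     if (!load) {
       load = { url: key, promise: fetch(key) };
       loads.set(key, load);
     }
     return load;
   }

   // If the key were instead the post-redirect response URL, the merge of
   // b.js and e.js would only be discovered after the fetch, which is
   // exactly the ambiguity described above (re-run? discard? which deps?).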
Received on Friday, 23 May 2014 20:27:53 UTC