Re: [cssom] Allowing multiple @imports of the same url to be deduped

From: Boris Zbarsky <bzbarsky@MIT.EDU>
Date: Fri, 15 Aug 2014 23:09:26 -0400
Message-ID: <53EECB66.3020003@mit.edu>
To: www-style@w3.org
On 8/15/14, 8:23 PM, Tab Atkins Jr. wrote:
> So, I'm trying to fix this.  I think I can address both of these
> problems by making CSSRuleList constructable, and making it sharable
> across Stylesheet objects.  When the UA is constructing them, it
> dedupes based on URL within the current top-level document, so all
> uses of a given URL share a single CSSRuleList object, though they
> generate different Stylesheet objects.

Just FYI, Gecko already does something like this, but without the compat 
issue you want to introduce.

More precisely, we have internal and DOM-facing representations of CSS 
rules.  The internal one is shared across all loads for a given document 
(keyed on URI, CORS mode, and the origin of the thing starting the load 
if I recall correctly; keying on just URI but not CORS mode is clearly 
wrong).  The DOM-facing one is lazily created as needed.  The setup is 
basically copy-on-possible-write, in that the internal representation is 
cloned lazily if you try to get the .cssRules of a sheet or add/remove 
rules to the sheet, so we can put distinct CSSRule instances in distinct 
CSSRuleLists.  This object cache does NOT pay any attention to HTTP 
headers, just like the per-document image cache.
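A rough sketch of the arrangement described above, in Python pseudocode (the class and method names here are purely illustrative, not Gecko's actual internals): the parsed rules are shared across all loads for a given (URI, CORS mode, origin) key, and a DOM-facing sheet only makes its own private copy the first time you touch .cssRules or mutate the sheet.

```python
class InternalSheet:
    """Parsed rules, shared across all loads with the same cache key."""
    def __init__(self, rules):
        self.rules = list(rules)


class SheetCache:
    """Per-document cache keyed on (URI, CORS mode, loader origin)."""
    def __init__(self):
        self._cache = {}

    def get_or_load(self, uri, cors_mode, origin, load):
        key = (uri, cors_mode, origin)
        if key not in self._cache:
            self._cache[key] = InternalSheet(load(uri))
        return self._cache[key]


class DOMSheet:
    """DOM-facing stylesheet; copy-on-possible-write over the shared rules."""
    def __init__(self, internal):
        self._internal = internal
        self._own_rules = None  # None => still sharing the internal rules

    @property
    def css_rules(self):
        # Getting .cssRules hands out mutable CSSRule objects, so even a
        # read forces the lazy clone; mutation goes through here too.
        if self._own_rules is None:
            self._own_rules = list(self._internal.rules)
        return self._own_rules

    def insert_rule(self, rule, index=0):
        self.css_rules.insert(index, rule)
```

With this shape, two <link>s to the same URL share one InternalSheet (one network load, one parse), yet each DOMSheet that gets touched ends up with distinct CSSRule instances, so edits to one sheet never show through in the other.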

This behavior actually has a pretty serious issue in some cases: if a 
<link> is removed from the DOM, or its href is changed, the sheet it 
linked to effectively leaks for the lifetime of the document.

At the moment we're considering just expiring things from the 
per-document cache on a timer to deal with this issue.  That's not 
observable in the common case where the sheet doesn't vary over time, 
due to the COW behavior I described, though it is of course observable 
via ServiceWorker, server-side logging, or the server simply sending 
different data for the URL each time.
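The expiry idea can be sketched like so (again purely illustrative names, with an injectable clock so the behavior is easy to exercise): entries unused for longer than the TTL are dropped, and a later request for the same key simply re-loads, which is only observable if the server sends different bytes the second time.

```python
class ExpiringCache:
    """Sketch of per-document cache entries expiring on a timer."""
    def __init__(self, ttl, clock):
        self._ttl = ttl
        self._clock = clock
        self._entries = {}  # key -> (value, last_use_time)

    def get_or_load(self, key, load):
        now = self._clock()
        # Drop entries not used within the TTL.  A re-load after expiry
        # is only observable if the server returns different data.
        for k, (_, t) in list(self._entries.items()):
            if now - t > self._ttl:
                del self._entries[k]
        if key in self._entries:
            value, _ = self._entries[key]
        else:
            value = load(key)
        self._entries[key] = (value, now)
        return value
```

The point is that once the last <link> referencing a URL goes away, the shared data stops being touched and eventually ages out, instead of leaking for the lifetime of the document.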

Clearly a spec that requires us to hold on to this data for eternity 
would not allow us to do that; I would be opposed to such a spec.

> This has an obvious possible compat issue - if people currently import
> the same url in multiple places

Which they do; I've seen quite a number of instances of it.

> and edit instances

I expect they do and would be vastly surprised if not.

> expecting them to be separate

This is the big question.

> I suspect that's super-rare?

Can we quantify that before making web-compat-breaking changes here? 
Especially since the COW approach can give us the same benefits in 
practice without the compat breakage.

-Boris
Received on Saturday, 16 August 2014 03:09:56 UTC
