
Re: Computational complexity of CSS

From: Orion Adrian <orion.adrian@gmail.com>
Date: Wed, 16 Nov 2005 19:44:44 -0500
Message-ID: <abd6c8010511161644x6bf3a1e9n8c661e63305f2593@mail.gmail.com>
To: www-style@w3.org

On 11/16/05, Andrew Fedoniouk <news@terrainformatica.com> wrote:
>
> Thanks, David,
>
> The blog article is perfect.
>
> ----- Original Message -----
> From: "David Hyatt" <hyatt@apple.com>
> |
> | Andrew,
> |
> | Your O(N*M) assertion bears little resemblance to reality.  In
> | practice sibling styles (and even cousin styles) can be shared,
> | avoiding even the matching or resolution phases.  For typical Web
> | pages, this sibling/cousin sharing optimization results in having to
> | match styles for only about 30% of the elements in a document.
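
The sibling/cousin sharing Hyatt describes can be sketched roughly as follows (a hypothetical Python illustration with invented names, not WebCore's actual code): reuse an earlier sibling's computed style when two elements carry the same tag and classes and neither has an id, so full rule matching runs only for the first element of each "shape".

```python
def share_key(elem):
    """Key under which an element's computed style may be shared.

    elem is a dict with 'tag', 'classes', and optionally 'id'.
    Elements with an id never share (an id rule could single them out).
    """
    if elem.get("id"):
        return None
    return (elem["tag"], frozenset(elem.get("classes", ())))

def resolve_styles(siblings, compute_style):
    """Resolve styles for a run of siblings, sharing where possible.

    Returns (styles, matched) where matched counts how many elements
    actually went through full rule matching.
    """
    cache = {}
    styles = []
    matched = 0
    for elem in siblings:
        key = share_key(elem)
        if key is not None and key in cache:
            styles.append(cache[key])      # reuse: no matching needed
        else:
            style = compute_style(elem)    # full match + cascade
            matched += 1
            if key is not None:
                cache[key] = style
            styles.append(style)
    return styles, matched
```

With three `li.item` siblings where two are identical, only two full matches are needed; this is the effect behind the "only about 30% of elements" figure above.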
>
> Yes, of course, caching is a common technique for reducing O(N*N)
> problems, but here (in your blog) is a good point:
>
> "(10) There must be no sibling selectors in use at all. WebCore simply throws a
> global switch when any sibling selector is encountered and disables style
> sharing for the entire document when they are present. This includes the +
> selector and selectors like :first-child and :last-child."
>
> So implementing e.g. :nth-child(...) ruins caching (sharing, in your
> case), and this is what bothers me now.
> So if somebody decides to use :nth-child(3n+1), it will produce an
> O(N*M*D) problem. And the author of the document will have no idea
> what happened.
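
Why positional selectors defeat sharing can be seen in a small sketch (hedged Python illustration, not engine code): two siblings with identical tag and classes can still compute different styles under :nth-child(an+b), so their styles cannot safely be shared.

```python
def matches_nth_child(index, a, b):
    """True if a 1-based sibling index matches :nth-child(an+b).

    Per the Selectors spec, the index matches when index = a*n + b
    for some non-negative integer n.
    """
    if a == 0:
        return index == b
    n, rem = divmod(index - b, a)
    return rem == 0 and n >= 0

# li:nth-child(3n+1) matches the 1st, 4th, 7th... siblings, so
# otherwise-identical <li> elements end up with different styles.
styles = ["highlight" if matches_nth_child(i, 3, 1) else "plain"
          for i in range(1, 6)]
# → ['highlight', 'plain', 'plain', 'highlight', 'plain']
```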
>
> |
> | Moreover, each element need not examine every rule.  Rules can be
> | hashed based off what's in their rightmost simple selector sequence.
> | Again for typical pages, a given element need only examine a very
> | small subset of the actual rules.
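
The rule hashing described above can be sketched as follows (a simplified, hypothetical Python illustration; a real engine keys on the full rightmost compound selector and handles many more cases): rules are bucketed by the id, class, or tag of their rightmost simple selector, and an element examines only the buckets that could possibly match it.

```python
from collections import defaultdict

def rightmost_key(selector):
    """Return a (kind, value) bucket key for the rightmost simple
    selector, preferring id over class over tag."""
    last = selector.split()[-1]            # rightmost compound selector
    if "#" in last:
        return ("id", last.rsplit("#", 1)[1])
    if "." in last:
        return ("class", last.rsplit(".", 1)[1])
    return ("tag", last)

def build_rule_hash(rules):
    """Bucket each rule's selector by its rightmost key."""
    buckets = defaultdict(list)
    for sel in rules:
        buckets[rightmost_key(sel)].append(sel)
    return buckets

def candidate_rules(buckets, elem):
    """Collect only the rules an element could possibly match.

    elem is a dict with 'tag' and optionally 'id' and 'classes'.
    """
    cands = []
    if elem.get("id"):
        cands += buckets.get(("id", elem["id"]), [])
    for c in elem.get("classes", ()):
        cands += buckets.get(("class", c), [])
    cands += buckets.get(("tag", elem["tag"]), [])
    return cands
```

For a stylesheet of M rules, each element now checks only its own few buckets rather than all M selectors, which is the "very small subset" effect described above.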
>
> Again, this is exactly the kind of insider knowledge that needs to be
> published - the more fully the rightmost simple selector is defined,
> the better. On some rule sets, results can vary by an order of
> magnitude.
>
> |
> | With these optimizations CSS scales up quite well, even to a large
> | number of rules and elements.
>
> Yes, quite well, but not always. The point is that only a person who
> knows the internals of a particular engine can say whether a given
> system of styles is good or not - and then only for some particular DOM.
>
> So, formally speaking, it has a computational complexity of O(N*M*D).
> Optimizations can help in common cases, but the worst case remains.
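
The worst case can be made concrete with a naive matcher (a hypothetical sketch, not any engine's code): with no hashing and no sharing, every element (N) tests every rule (M), and each descendant-combinator test may walk the element's ancestor chain (depth D), giving O(N*M*D) comparisons overall.

```python
def naive_descendant_match(ancestor_tags, elem_tag, rule):
    """Test a two-part descendant rule against one element.

    rule is a tuple like ('div', 'p'), meaning the selector 'div p'.
    Returns (matched, comparisons); comparisons grows with the depth
    of the ancestor chain, which is the D factor in O(N*M*D).
    """
    left, right = rule
    comparisons = 1                        # test the rightmost part
    if elem_tag != right:
        return False, comparisons
    for tag in reversed(ancestor_tags):    # walk up to D ancestors
        comparisons += 1
        if tag == left:
            return True, comparisons
    return False, comparisons
```

Run over N elements and M such rules, the comparison count is bounded by N*M*(D+1); the optimizations above reduce the typical case, not this bound.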
>
> |
> | See my blog entry I wrote on this subject a while back:
> |
> | http://weblogs.mozillazine.org/hyatt/archives/2005_05.html#007507
>
> Special thanks for that; it seems we are on the same track.

Given that many browsers over many generations may view a page, we
really should design CSS for compatibility and maintainability, not
for a snapshot of performance statistics across the two or three most
popular browsers in a given year.

--

Orion Adrian
Received on Thursday, 17 November 2005 00:44:53 GMT
