
Re: Suggestion for CSS3 selectors (fwd)

From: Andrew Fedoniouk <news@terrainformatica.com>
Date: Thu, 01 May 2008 20:13:36 -0700
Message-ID: <481A86E0.4060405@terrainformatica.com>
To: Andrei Polushin <polushin@gmail.com>
CC: fantasai <fantasai.lists@inkedblade.net>, "Brian J. Fink" <desertowl23@gmail.com>, dbaron@dbaron.org, www-style@w3.org

Andrei Polushin wrote:
> Andrew Fedoniouk wrote:
>> Andrei Polushin wrote:
>>> 1. In the first phase, we notice the child nodes that match those
>>> "retroactive" selectors. A reference to each such child is recorded by
>>> its corresponding parent for later use.
>> Let's consider the selector:
>>   p.description < a:link
>> At this first stage, building such a list (finding all a:link elements 
>> and their p.description parents) is a task of complexity O(n*d), where 
>> n is the number of elements in the DOM and d is the depth of the DOM 
>> tree. Elements matching p.description will have a list of references 
>> to CSS rules (selectors and style definitions).
> Not necessarily. Elements matching p.description will have a list of 
> references to descendant a:link elements. The CSS rules will be found 
> later, at the second stage.

Pardon, but why do you need a list of a:link elements?
What you need, in the end, is a list of rules.
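For concreteness, here is a minimal Python sketch of the two-phase scheme being discussed, for a single "retroactive" (parent) rule such as p.description < a:link. All names here are invented for illustration; this is not any UA's actual implementation, and real engines would index selectors rather than take predicates:

```python
# Sketch (hypothetical names) of two-phase matching for a parent selector
# like:  p.description < a:link { ... }

class Element:
    def __init__(self, tag, classes=(), children=()):
        self.tag = tag
        self.classes = set(classes)
        self.children = list(children)
        self.matched_rules = []   # rules collected for this element

def phase1_collect(root, child_pred, parent_pred, rule):
    """Phase 1: walk the tree once; whenever a node matches the right-hand
    side of the retroactive selector, attach the rule to every matching
    ancestor.  O(n*d) in the worst case (n nodes, depth d)."""
    def walk(node, ancestors):
        if child_pred(node):
            for anc in ancestors:
                if parent_pred(anc) and rule not in anc.matched_rules:
                    anc.matched_rules.append(rule)
        for child in node.children:
            walk(child, ancestors + [node])
    walk(root, [])

def phase2_resolve(root):
    """Phase 2: resolve the final style of each node; here we simply
    report which rules each element has collected."""
    out = []
    def walk(node):
        out.append((node.tag, list(node.matched_rules)))
        for child in node.children:
            walk(child)
    walk(root)
    return out

# Example tree: <html><p class="description"><a class="link"/></p></html>
link = Element("a", classes={"link"})
p = Element("p", classes={"description"}, children=[link])
root = Element("html", children=[p])

phase1_collect(
    root,
    child_pred=lambda e: e.tag == "a" and "link" in e.classes,
    parent_pred=lambda e: e.tag == "p" and "description" in e.classes,
    rule="p.description < a:link",
)
resolved = phase2_resolve(root)
# the rule ends up attached to the p element, not to the a element
```

Note that whether phase 1 records rule references (as above) or element references, the cost structure is the same; the disagreement in the thread is only about which artifact to cache.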

>>> 2. In the second phase, we resolve the style of each node, matching 
>>> both the conventional selectors (as we did in CSS2) and using the 
>>> retroactive selectors previously matched in phase 1.
>> The complexity of this stage is also close to O(n*d), so, indeed, we 
>> get a better picture:
>>  O(2*n*d), at the price of maintaining a rules list for each DOM element.
>> That is the kind of caching commonly used to solve O(n*n) problems.
>> That appears better, but is still not that good. For dynamic updates 
>> (e.g. a:hover) you will need to recalculate the styles not only of 
>> descendants/siblings but also of all elements in parent containers, 
>> potentially the whole tree.
> You're estimating the problem for the worst case, but that case doesn't 
> appear to be bad enough in practice. On average, the worst case is rare 
> and shouldn't occur. Even if it does occur, it could be made apparent 
> to the web author that the computational problem is the result of 
> his/her particular authoring style. The author could then avoid the 
> problem by changing that style.
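The invalidation cost being argued about above can be sketched as follows. This is a hypothetical illustration, not any engine's actual dirty-marking scheme: with a parent selector in play, toggling a state (such as :hover) on a descendant can change the matched rules of every ancestor, so the set of potentially dirty elements runs all the way to the root rather than being confined to the subtree:

```python
# Hypothetical sketch: when parent selectors like
#   p.description < a:link
# exist, flipping a child's dynamic state can change its *ancestors'*
# matched rules, so invalidation must walk up toward the root.

def invalidate_for_parent_selectors(node, parent_of):
    """Return every element whose style may need recomputation after
    `node` changes state: the node itself plus all of its ancestors."""
    dirty = [node]
    cur = parent_of.get(node)
    while cur is not None:        # climb until the root (no parent)
        dirty.append(cur)
        cur = parent_of.get(cur)
    return dirty

# Tiny parent map for: html > body > p#1 > a#1
parent_of = {"a#1": "p#1", "p#1": "body", "body": "html"}
dirty = invalidate_for_parent_selectors("a#1", parent_of)
# every ancestor up to <html> must be reconsidered
```

Without parent selectors, the same state flip would dirty only the node and (for descendant/sibling combinators) elements after or below it, which is the asymmetry Andrew is pointing at.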

It is generally a bad idea to rely on the author and his/her motivation 
to notice such things. If this feature works (has any visual effect), 
then the author will blame the vendor first when it runs slowly. The 
problem is that CSS authors have little knowledge about these matters.

Say you have UA-A and UA-B. The first one supports this retroactive 
feature, and the author uses such a rule. For UA-B, the author will 
create a workaround. That workaround (scripting) will run two times 
faster, so UA-B will win. What is the point, then?

> Thus I agree that the computational problem could remain. It would be 
> better if we had formal research and a conclusion on this from several 
> competent scientists.

If a former Soviet rocket scientist would do, then you have got one 
already :)

Andrew Fedoniouk.

Received on Friday, 2 May 2008 03:14:15 UTC
