Re: Proposal: parent selectors

On 1/21/10 1:37 PM, Eduard Pascual wrote:
> On the extreme cases of mutations every 10ms

It's not the extreme case... it's the common "animate stuff with js" case.

> I would expect the authors to know what they are doing

They typically don't, actually.  Unfortunate, but true.

> It's a matter of common sense to test an
> application during development. If a developer finds that an
> application is going painfully slow, the usual thing to do is to
> remove or polish the parts of it that cause the lag.

Sort of.  The usual thing to do is to complain about browsers being 
slow.  Note, in particular, that this gives a leg up to browsers that 
happen to not implement certain features, or not handle dynamic changes 
properly because doing so would be "too slow".

This is not a theoretical problem, by the way.

> In addition, on the context of dynamic applications, a parent selector
> wouldn't be as needed as on scriptless context: it may always be
> emulated by putting a class or event handler on the "parent" element,
> and doing some tinkering through scripts.

While true, in practice I fully expect it would be used.
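
For what it's worth, the emulation in question comes out to something 
like this (a rough sketch; the id and class names are made up):

  // Instead of a rule like "ul:has(:checked) {...}", use a plain
  // ".has-checked {...}" rule and toggle the class from script
  // whenever the relevant state changes:
  var list = document.getElementById("options");   // made-up id
  list.addEventListener("change", function () {
    list.classList.toggle("has-checked",
                          !!list.querySelector("input:checked"));
  });

That works well enough when the app knows exactly which state changes 
matter; the general case is another story (see below).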

> For the case of thousands (or more) of updates at once, that's IMO a
> matter of UA- and application- specific optimizations.

In other words, "not my problem, your problem".  Thanks!  :)

> On simple cases, UAs can be smart enough to notice a sequence of updates so the
> rendering is only refreshed once.

UAs already do this.  The slow part with :has is not refreshing, in 
general, but figuring out which parts to refresh.

> On more complex cases, a good idea could be to add some sort of API to let the application warn the UA
> about what's going to happen. I've been doing a lot of .Net coding
> lately, so something like WinForms' SuspendLayout() and ResumeLayout()

This has been proposed.  Unfortunately, the typical web app would have a 
strong tendency to not call ResumeLayout in various cases, often 
UA-dependent ones (based on my experience debugging such things in the 
past).  Add to that the fact that in the near term these calls would be 
in try/catch blocks or feature-tested, so UAs that don't implement them 
would actually have a competitive advantage (in the "stuff doesn't break 
in this browser" sense).  Therefore there's little incentive for UAs to 
implement them.
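
To make the failure mode concrete: the intended usage pattern would be 
something like the following (suspendLayout/resumeLayout being the 
hypothetical API, not anything that exists):

  document.suspendLayout();     // hypothetical API
  try {
    doLotsOfDOMUpdates();       // placeholder for the app's actual work
  } finally {
    document.resumeLayout();    // hypothetical API
  }

but what actually gets written is the version without the try/finally, 
so the first exception in the update code (thrown in whichever UA 
happens to hit it) leaves layout suspended for good.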

> I think of two forms of flagging; and I'll admit that both are based
> on an assumption: this feature would normally be used on a small
> subset of the rulesets and elements of a page.

The former is probably a good assumption.  The latter may not be, 
depending on what "used" means here....

> Also, I think it's reasonable for the cost of a feature to be proportional
> to (or dependent on) how much it is used (if you disagree on this, I'd
> really like to hear (or read) your opinion).

I'm not sure what the statement means...  Are you saying that it's OK 
for a feature to cause all performance on the page to go down the tubes 
as long as not many pages use it?  Or something else?

> Form 1: An array of pointers to the "deferred" rulesets (any ruleset
> that uses this feature or, in the future, any other feature with
> similar performance issues), and an array of pointers to the
> "critical" elements for these rulesets. In the case of a "parent" or
> "ancestor" selector, this would be the any element that matches the
> part of the selector before ":has(".

This latter list is quite likely to include a significant fraction of 
the document, as I said earlier in this thread.
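
To see why, consider what building that second list involves (a rough 
sketch, with made-up names):

  // For a rule like "div:has(p.warning) {...}", the part before
  // ":has(" is just "div", so every div in the document is "critical":
  var deferredRulesets = [
    { prefix: "div", selector: "div:has(p.warning)" }   // made-up shape
  ];
  var criticalElements = [];
  deferredRulesets.forEach(function (rule) {
    var matched = document.querySelectorAll(rule.prefix);
    for (var i = 0; i < matched.length; ++i) {
      criticalElements.push(matched[i]);
    }
  });

A prefix like "div" or "*" (and authors will write those) makes the 
"critical" set most of the page.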

> With either form, changes within a "critical" element

Determining whether a change is within a "critical" element is not 
necessarily particularly cheap, right?  Or it requires a bunch of 
state-propagation in the DOM.  Or both.

> would require recomputing the "deferred" styles

Meaning what?  Some set of nodes would need to have their styles 
recomputed.  What set?

> In the abstract, it seems that these (or any other) forms of
> flagging would allow determining what needs to be updated much faster
> than by re-matching each selector against the document tree on
> every DOM update.

What _really_ needs to happen is to:

1)  Determine what elements might have changes in the set of rules
     applying to them.
2)  For those elements, determine the new set of rules.

To avoid full selector matching, step 2 would need to test just the 
rules whose match status might have changed, removing them from the 
lists or adding them in the right places.  It would also need to 
invalidate any information that was cached in the rule lists (in Gecko's 
case, for example, a fair amount) and so forth.  It may be faster to do 
this than to just redo matching.  It might not be.  Hard to tell up 
front.  It would certainly not be cheap.

As for the subset of the document affected (step 1 above), if we're 
talking about doing this for all descendants of "critical" elements then 
this will often be a big chunk of the overall DOM.
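
In sketch form, with criticalElements coming from flagging as above and 
retestDeferredRules standing in for the per-element work of step 2 (both 
placeholders, not real APIs):

  function restyleAfterMutation(mutatedNode, criticalElements) {
    for (var i = 0; i < criticalElements.length; ++i) {
      var critical = criticalElements[i];
      if (!critical.contains(mutatedNode)) continue;
      // Step 1: every descendant of an affected "critical" element
      // might have a changed rule set -- often a big chunk of the DOM.
      var candidates = critical.getElementsByTagName("*");
      for (var j = 0; j < candidates.length; ++j) {
        retestDeferredRules(candidates[j]);   // step 2, per element
      }
    }
  }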

> (I'm not asking if this would be enough to
> solve the issues, just if it would be enough to solve or mitigate part
> of them.)

Sure.  If this were implemented something like that would absolutely 
have to be done.  It would slow down normal selector matching and style 
computation, I suspect, and it would not necessarily help enough in 
common-enough :has/:matches cases.  But the only way to tell for sure is 
to spend some time implementing it and see, of course.

> Again, please let me know: do you think my suggestion/idea about
> flagging would help mitigate that cost?

In some cases, yes.  In others, no.  In yet others it actually makes it 
more expensive than just restyling everything.

The question is the relative frequency of those cases...

>> Pretty much anything that involves spending more than 10ms on a set of DOM
>> mutation is a hit, right (since it's directly detectable from script as a
>> framerate fall)?
>
> How much of a hit would it be compared to a script that:
> 1) Tweaks the loaded CSS replacing each instance of the :has()
> selector with a new class name, and keeps a copy of the original in a
> variable.
> 2) Upon page load and upon each change of the document:
>    2.1) Checks for each element to see if a :has() rule applies to it.
>    2.2) If the rule would match, adds the relevant class name to the
> @class attribute of the matched element, so it matches the tweaked CSS
> rule.
>    2.3) If the rule doesn't match AND the element contains the relevant
> class, remove the class so it doesn't match the tweaked CSS rule
> anymore.

Much less, of course.  ;)
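
For reference, step 2 of that scheme comes out to something like this (a 
rough sketch, with made-up names, assuming the :has() rules have already 
been rewritten to plain class rules per step 1):

  function refreshHasEmulation(rewrittenRules) {
    for (var i = 0; i < rewrittenRules.length; ++i) {
      // e.g. { prefix: "div", argument: "p.warning",
      //        className: "has-warning" } for "div:has(p.warning)"
      var rule = rewrittenRules[i];
      var candidates = document.querySelectorAll(rule.prefix);
      for (var j = 0; j < candidates.length; ++j) {
        var el = candidates[j];
        el.classList.toggle(rule.className,
                            !!el.querySelector(rule.argument));
      }
    }
  }

Running that after every document change is precisely the cost the 
script-based approach can't avoid.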

> I'm asking this because, for as long as UAs don't provide this
> feature, this is the only solution for the general case.

Which means that no one uses the general case, especially not by 
accident, right?

-Boris

Received on Thursday, 21 January 2010 19:05:54 UTC