
Re: Proposal: parent selectors

From: Eduard Pascual <herenvardo@gmail.com>
Date: Thu, 21 Jan 2010 19:37:09 +0100
Message-ID: <6ea53251001211037n5da7b9betbd5ba30ed2667566@mail.gmail.com>
To: Boris Zbarsky <bzbarsky@mit.edu>
Cc: www-style@w3.org
On Thu, Jan 21, 2010 at 5:48 PM, Boris Zbarsky <bzbarsky@mit.edu> wrote:
> On 1/21/10 11:36 AM, Eduard Pascual wrote:
>>
>> Fact #1: whatever that can be achieved through jQuery, could be done
>> faster through a native implementation within the UA.
>
> Agreed.
>
> But people aren't asking for a jquery-equivalent feature (that is, one where
> you would perform the selector-match once and be done).  They're asking for
> a feature where one has to constantly track which nodes match the selector
> in the face of DOM mutations.  That's something jquery does NOT do last I
> checked.
Alright, my mistake in overestimating jQuery (honestly, I haven't
really used it; I have just looked at it from time to time because it
was the "reference" for many of these discussions).
So my argument has no value for the "dynamic DOM alterations" part,
but I still hope we can agree that deferred processing can help with
the "insanely inefficient for progressive rendering" part.

>> Question: if :has() (or any equivalent feature regardless of syntax)
>> was implemented, would it need to take profit of progressive
>> rendering?
>> Answer, Fact #2: No.
>> Proof: Those authors who really need :has()'s functionality rely on
>> jQuery, so they don't get progressive rendering. It's a trade-off, and
>> the volume of web authors relying on jQuery are a proof that it's a
>> good deal on many cases. Even for newbie authors who don't understand
>> the efficiency costs of this feature, if they have a minimum idea of
>> what they are doing they'll test their pages on at least one browser,
>> so they'll get an idea on how fast or slow they load.
>
> It's not just progressive rendering.  DOM mutations happen after the
> document has loaded.  On some sites, every 10ms, forever.  On other sites,
> thousands or tens of thousands at once.
I know it's not just progressive rendering. Progressive rendering is
the part that affects all pages; DOM mutations affect a subset of
pages. In the extreme case of mutations every 10ms, I would expect
the authors to know what they are doing: that'd be a web application
rather than a web document. It's a matter of common sense to test an
application during development. If a developer finds that an
application is running painfully slowly, the usual thing to do is to
remove or polish the parts of it that cause the lag.
In addition, in the context of dynamic applications, a parent selector
wouldn't be as needed as in a scriptless context: it can always be
emulated by putting a class or event handler on the "parent" element
and doing some tinkering through scripts.

For the case of thousands (or more) of updates at once, that's IMO a
matter of UA- and application-specific optimizations. In simple
cases, UAs can be smart enough to notice a sequence of updates so that
the rendering is only refreshed once. In more complex cases, a good
idea could be to add some sort of API to let the application warn the
UA about what's going to happen. I've been doing a lot of .NET coding
lately, so something like WinForms' SuspendLayout() and ResumeLayout()
comes to mind (it's a different context, but exactly the same
purpose): the UA could expose "SuspendRenderingUpdates()" and
"ResumeRenderingUpdates()" (the names are arbitrary; any reasonable
name would do).
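To make the intent concrete, here is a minimal sketch of the batching behavior such a suspend/resume pair would buy. Everything here is hypothetical (makeRenderer, notifyMutation, the method names themselves are made up for illustration); no real DOM API works this way today.

```javascript
// Hypothetical sketch: a renderer that normally refreshes on every
// mutation, but coalesces a whole batch into one refresh when the
// application brackets its updates with suspend/resume calls.
function makeRenderer() {
  let suspended = false;
  let pendingUpdates = 0;
  let refreshCount = 0; // how many times the (costly) refresh actually ran

  function refresh() {
    refreshCount += 1;
    pendingUpdates = 0;
  }

  return {
    // Called by the engine whenever a DOM mutation would dirty styles.
    notifyMutation() {
      pendingUpdates += 1;
      if (!suspended) refresh(); // normal mode: one refresh per mutation
    },
    suspendRenderingUpdates() {
      suspended = true;
    },
    resumeRenderingUpdates() {
      suspended = false;
      if (pendingUpdates > 0) refresh(); // one refresh for the whole batch
    },
    get refreshCount() { return refreshCount; },
  };
}

// Usage: ten thousand mutations inside the pair cost a single refresh.
const r = makeRenderer();
r.suspendRenderingUpdates();
for (let i = 0; i < 10000; i++) r.notifyMutation();
r.resumeRenderingUpdates();
console.log(r.refreshCount); // 1
```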

>
>> Based on the above, here it goes:
>> Suggestion for UA vendors: implement some version of this feature
>> experimentally (maybe a limited version, such as the "child" and/or
>> "next-sibling" versions that have been discussed previously), with
>> vendor prefixes or whatever, that defers processing of rules using
>> this feature until the rest of the document has been loaded.
>
> So process them once at DOMContentLoaded and then never process them again?
>  If the DOM changes, ignore any effect that might have on :has() matching?
>  Is that the proposal?
Nope. To begin with, I wasn't making a proposal, just a suggestion (a
proposal would be specific and concise; a suggestion is deliberately
vague and open to interpretation, analysis, discussion, and
experimentation). That said, going strictly by my wording, what you
describe could match my suggestion, since it could be considered a
"limited version" of the feature. And the fact is that it would
already be more useful than no parent selector at all (consider that
it would solve the use cases for any static page, and probably even
some dynamic pages). However, you stripped from the quote the last
part of my suggestion, which actually dealt with the topic of dynamic
DOM mutations: the idea of flagging.

Since you seem to have missed it, I'll go a bit deeper into it (I
can't go too deep anyway, since I don't know much about the inner
workings of CSS rendering engines):
I can think of two forms of flagging, and I'll admit that both are
based on an assumption: this feature would normally be used on a small
subset of the rulesets and elements of a page. Also, I think it's
reasonable for the cost of a feature to be proportional to, or at
least dependent on, how much it is used (if you disagree on this, I'd
really like to hear (or read) your opinion).
Form 1: an array of pointers to the "deferred" rulesets (any ruleset
that uses this feature or, in the future, any other feature with
similar performance issues), and an array of pointers to the
"critical" elements for these rulesets. In the case of a "parent" or
"ancestor" selector, a critical element would be any element that
matches the part of the selector before ":has(". For a "full-powered"
implementation of :matches(<selector>), it would be the parent of the
element matching the left part of the selector, so that the siblings
of the matching element are also covered.
Form 2: Instead of keeping the two arrays, add a boolean flag to the
in-memory data structures that represent the "deferred" rulesets and
the "critical" elements.
With either form, changes within a "critical" element would require
recomputing the "deferred" styles, but other changes wouldn't trigger
any extra work. In the abstract, it seems that these (or other) forms
of flagging would allow determining what needs to be updated much
faster than by re-matching each selector against the document tree on
every DOM update. In practice... well, I have no practical experience
on the topic, so I'd like to know your opinion: do you think this
would help? (I'm not asking whether this would be enough to solve the
issues, just whether it would be enough to solve or mitigate part of
them.)
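A rough sketch of the "Form 2" check, using plain objects for nodes (all names here are illustrative; no real engine is structured this way): a mutation only forces the expensive deferred-style recomputation when it happens inside a subtree whose root carries the "critical" flag.

```javascript
// Hypothetical sketch of Form 2 flagging: each node carries a boolean
// "critical" flag; a mutated node triggers deferred recomputation only
// if it, or one of its ancestors, is flagged.
function makeNode(parent = null) {
  return { parent, critical: false };
}

// Walk up from the mutated node; any critical ancestor (or the node
// itself) means the "deferred" rulesets must be recomputed.
function mutationNeedsDeferredRecompute(node) {
  for (let n = node; n !== null; n = n.parent) {
    if (n.critical) return true;
  }
  return false;
}

// Usage: only mutations inside a critical subtree pay the cost.
const root = makeNode();
const section = makeNode(root);
section.critical = true; // e.g. it matches the part before ":has(" in a rule
const insideChild = makeNode(section);
const outsideChild = makeNode(root);

console.log(mutationNeedsDeferredRecompute(insideChild));  // true
console.log(mutationNeedsDeferredRecompute(outsideChild)); // false
```

The walk is O(depth) per mutation, which is the sense in which the cost stays proportional to how much the feature is used: pages with no deferred rulesets flag nothing and pay almost nothing.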

Having clarified that point, let me go back: my suggestion was just to
give this a try: even a naive implementation that re-renders
everything would already be useful for testing and for figuring out
patterns that would guide the way toward optimizations. I'm not
suggesting specifically that kind of implementation, but any kind,
even if it's for a limited variant of the feature. Once someone makes
it, then comes the time to improve it ;-)

On Thu, Jan 21, 2010 at 6:28 PM, Boris Zbarsky <bzbarsky@mit.edu> wrote:
> On 1/21/10 12:11 PM, Giuseppe Bilotta wrote:
>>
>> When the DOM changes, the UA still has to rerender everything below
>> _and_ above the changed elements.
>
> That's not true at all.  It has to rerender things that changed.  These are
> usually quite limited in scope.
>
> The problem with :has is that the process of determining what things have
> changed can become very very expensive.
Again, please let me know: do you think my suggestion/idea about
flagging would help mitigate that cost? (If the answer is no, I'd
appreciate a brief explanation of why, just to figure out what I got
wrong.) That was the very purpose of the idea; so if it won't do, I'll
start thinking of something else (and knowing why it can't help would
be really useful for that).

>> I don't think this would be TOO much of a hit, would it?
>
> Pretty much anything that involves spending more than 10ms on a set of DOM
> mutation is a hit, right (since it's directly detectable from script as a
> framerate fall)?

How much of a hit would it be compared to a script that:
1) Tweaks the loaded CSS, replacing each instance of the :has()
selector with a new class name, and keeps a copy of the original in a
variable.
2) Upon page load and upon each change of the document:
  2.1) Checks each element to see if a :has() rule applies to it.
  2.2) If the rule would match, adds the relevant class name to the
@class attribute of the matched element, so it matches the tweaked CSS
rule.
  2.3) If the rule doesn't match AND the element contains the relevant
class, removes the class so it doesn't match the tweaked CSS rule
anymore.
? I'm asking this because, for as long as UAs don't provide this
feature, this is the only solution for the general case. Any
alternative better than this would hence be an improvement to the Web
as a whole.
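The re-check loop of steps 2.1-2.3 can be sketched like this, using plain objects instead of real DOM nodes so the logic stays visible (the names REWRITTEN_RULES, refreshEmulatedRules, and "has-img-child" are all made up for illustration; in a real page you'd toggle el.classList against elements from the document):

```javascript
// Hypothetical sketch of the class-toggling emulation described above.
// Stand-in for a DOM element: tag name, child list, and a class set.
function makeEl(tagName, children = []) {
  return {
    tagName,
    children,
    classes: new Set(), // stands in for el.classList
  };
}

// Step 1 is assumed done: the stylesheet's "div:has(> img)" was
// rewritten to ".has-img-child", and we keep the original test here.
const REWRITTEN_RULES = [
  {
    matches: (el) =>
      el.tagName === "DIV" && el.children.some((c) => c.tagName === "IMG"),
    className: "has-img-child",
  },
];

// Steps 2.1-2.3: on load and after every document change, re-check
// each element and toggle the substitute class to mirror the selector.
function refreshEmulatedRules(allElements) {
  for (const el of allElements) {
    for (const rule of REWRITTEN_RULES) {
      if (rule.matches(el)) {
        el.classes.add(rule.className);    // 2.2: matches, add the class
      } else {
        el.classes.delete(rule.className); // 2.3: no match, remove it
      }
    }
  }
}

// Usage: the div gains the class once an img child appears.
const div = makeEl("DIV");
refreshEmulatedRules([div]);
console.log(div.classes.has("has-img-child")); // false

div.children.push(makeEl("IMG"));
refreshEmulatedRules([div]);
console.log(div.classes.has("has-img-child")); // true
```

Note that the full pass re-checks every element against every rewritten rule on every change, which is exactly the cost profile the question above is comparing against.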
Of course, scripts that handle this in a page- or application-specific
way could be quite a bit more efficient; but they take a good deal of
work and are a pain to maintain, eating up development resources that
could otherwise be allocated to adding new features to the site and
improving existing ones.
Received on Thursday, 21 January 2010 18:38:23 GMT
