
Re: [selectors-api] Summary of Feature Requests for v2

From: Boris Zbarsky <bzbarsky@MIT.EDU>
Date: Thu, 24 Sep 2009 20:06:47 -0400
Message-ID: <4ABC0997.80503@mit.edu>
To: Sean Hogan <shogun70@westnet.com.au>
CC: public-webapps <public-webapps@w3.org>
On 9/24/09 6:45 PM, Sean Hogan wrote:
> That is surprising. Does the CSS engine do the same? If the CSS engine
> doesn't store the parsed selector then it probably doesn't matter for JS
> calls either.

In Gecko the CSS engine stores the parsed selector.  In addition, it 
stores the selectors in various bins in a data structure to make 
matching faster.  In practice this means that you don't have to match 
most nodes against most selectors when computing the set of rules that 
match a given node.  The up-front work makes sense because every node 
inserted into the DOM is guaranteed to need matching against every 
single one of those selectors, so the data structure quickly pays for 
itself.
https://developer.mozilla.org/en/Writing_Efficient_CSS has a description 
of the setup.  I believe Webkit has something similar.  Again, I can't 
speak to Trident or Presto.
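
A minimal sketch of that binning idea (assumption: a simplified model 
with made-up names like `addRule` and `candidateRules`, not Gecko's 
actual data structures) — rules are bucketed by the key of their 
rightmost simple selector, so most rules are never even considered for 
a given node:

```javascript
// Bins keyed by the rightmost simple selector: id, class, or tag name.
function makeRuleBins() {
  return { id: new Map(), cls: new Map(), tag: new Map() };
}

function addRule(bins, rule) {
  // rule.key = { kind: 'id' | 'cls' | 'tag', value: string }
  const bin = bins[rule.key.kind];
  if (!bin.has(rule.key.value)) bin.set(rule.key.value, []);
  bin.get(rule.key.value).push(rule);
}

function candidateRules(bins, node) {
  // Only rules whose rightmost key can possibly match this node are
  // returned; everything else is skipped without full selector matching.
  const out = [];
  if (node.id && bins.id.has(node.id)) out.push(...bins.id.get(node.id));
  for (const c of node.classes || []) {
    if (bins.cls.has(c)) out.push(...bins.cls.get(c));
  }
  if (bins.tag.has(node.tag)) out.push(...bins.tag.get(node.tag));
  return out;
}
```

A node with class "warning" would only be checked against rules binned 
under that class (plus its tag and id bins), not the whole stylesheet.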

In the querySelector(All) case, the browser has no way to know that the 
selector will ever be reused.  In practice, the native implementations 
were enough faster than what they were replacing, even without any 
particularly fancy optimizations, that simplicity was judged more 
important than squeezing every bit of performance out.  At least in 
Gecko's case.  If we get to the point where they're being a bottleneck 
again, that will likely be revisited.

> Take an event-delegation system that uses matchesSelector.
> Every event that it handles will walk the event path trying
> element.matchesSelector with every registered handler.
> e.g. There are twenty registered click handlers and a click event occurs
> on an element ten levels deep. There could be 20 * 10 = 200 calls to
> matchesSelector. Or 400 if the system simulates capture phase as well.

200 calls would equate to ~1ms of selector parsing time in the case of 
Gecko.  For a click event, that's not terrible.
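
That arithmetic can be sketched directly (assumption: `dispatch` and 
the handler shape here are hypothetical; a real delegation system 
would call element.matchesSelector at each step rather than take a 
matcher function):

```javascript
// Walk the ancestor chain from the event target, trying every
// registered handler's selector at every level -- the pattern the
// quoted text describes.
function dispatch(handlers, target, matches) {
  let calls = 0;
  for (let node = target; node; node = node.parentNode) {
    for (const h of handlers) {
      calls++;                         // one matchesSelector-style call
      if (matches(node, h.selector)) h.fn(node);
    }
  }
  return calls;                        // total selector-match attempts
}
```

With twenty registered handlers, a target ten levels deep, and a 
matcher that never hits, this makes exactly the 200 match attempts 
described above.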

> Or take a framework that adds enhancements to HTML elements based on
> selectors.
> The framework wants to handle dynamic insertion to / removal from the
> page, so every DOMNodeInserted / DOMNodeRemoved (or equivalent) it will
> call querySelectorAll for all registered enhancements to see if there is
> any work to do.

This could be much more of a problem.  I'd be interested in what the 
actual performance is like in this situation.  Remember, the 
selector-parsing time was just the overhead; the real time usage is 
walking the DOM and doing the matching.  For matchesSelector this is 
much less significant, of course, but for querySelectorAll it's likely 
to be the dominating factor (gut feeling; if someone wants to measure 
that would be welcome).
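
The pattern in the quoted text looks roughly like this (assumption: 
`register` and `onInserted` are hypothetical names, and the 
query function is passed in for illustration; a real framework would 
hook DOMNodeInserted or an equivalent mutation mechanism and call 
root.querySelectorAll directly):

```javascript
// Registered "enhancements": a selector plus a function to run on
// every element it matches.
const enhancements = [];

function register(selector, enhance) {
  enhancements.push({ selector, enhance });
}

// Called on every insertion. Each registered selector is re-run over
// the affected subtree, so the per-call cost (selector parsing plus a
// subtree walk) multiplies by the number of enhancements and the
// number of mutations.
function onInserted(root, querySelectorAll) {
  for (const e of enhancements) {
    for (const el of querySelectorAll(root, e.selector)) e.enhance(el);
  }
}
```

Three registered enhancements mean three full querySelectorAll-style 
subtree walks per insertion, which is where the dominating cost 
mentioned above would come from.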

I also wonder how well XBL or something like that would handle cases 
like this...  This setup (matching every node in a subtree against a set 
of selectors) is really not that well served by any of the APIs 
described here.  It's much closer to the CSS use case and would benefit 
from similar optimizations.

Note that I don't have anything against exposing "parsed selector" 
objects in JS.  I don't think it would be that difficult to implement 
it.  I'm just not sure whether the added complexity is really needed, 
and whether it's the best solution for the use cases.  Maybe it is; I'm 
just gathering data.  This is not exactly my area of expertise.
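
For concreteness, such a "parsed selector" object might look something 
like this (assumption: an entirely hypothetical API shape, nothing 
that was specified; the toy `compile` only handles tag selectors and 
stands in for a real parser):

```javascript
// Parse once, reuse many times: the matcher is built at construction
// and matches() never re-parses the selector text.
class CompiledSelector {
  constructor(source, matchFn) {
    this.source = source;        // original selector text
    this._match = matchFn;       // pre-parsed matcher, built once
  }
  matches(node) { return this._match(node); }
}

// Trivial stand-in "parser": only understands bare tag selectors.
function compile(selector) {
  const tag = selector.toLowerCase();
  return new CompiledSelector(selector, node => node.tag === tag);
}
```

A delegation system could then call compiled.matches(node) in its 
inner loop and pay the parsing overhead only at registration time.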

-Boris
Received on Friday, 25 September 2009 00:10:56 GMT
