- From: Jonas Sicking <jonas@sicking.cc>
- Date: Sat, 27 Dec 2008 18:46:38 +0100
- To: "Giovanni Campagna" <scampa.giovanni@gmail.com>
- Cc: public-webapps@w3.org
So I consider the main reason for the Selectors API to be that many more developers are familiar with selectors than with XPath. I base this on the fact that many more pages use CSS stylesheets than use XPath, by several orders of magnitude (probably more than 3). It is also demonstrated by the fact that many popular JS toolkits supply an API to let you query for elements using a selector.

So while XPath is able to fetch the same information as the new Selectors API, it requires one of two things to happen:

1. The author needs to learn XPath in addition to selector syntax.
2. A library needs to translate the CSS selector into an XPath expression.

Today, 2 seems to be what happens most of the time. As Boris pointed out, this has been shown to have some rather unfortunate performance penalties, see [1].

There is also another argument, besides "people already know selectors", that I have heard: it is quite possible that authors want to use the *same* selectors in a stylesheet as in the JS code. This advantage would hold true even for authors who know both XPath and selectors. However, I have not heard many examples of when this would happen, so I wouldn't rely on this argument too heavily.

I don't understand your argument that you have to learn the latest version of the selectors language. You can just stick with whatever subset you know, even if that isn't even all of level 1. Yes, in some cases you can just stick with DOM-Core, but for those cases nothing of what we're discussing here matters at all; you shouldn't use XPath for those either. So I'm not sure why you bring that up.

On the flip side, I don't buy the performance argument that selectors are faster than XPath when the raw API is used directly. While selectors were designed with higher constraints regarding performance, this is largely related to performance in the case of modifications to the document. That is, a CSS implementation needs to track dependencies and figure out which selectors to reevaluate when a given change happens to the document. Such requirements do not apply here. You could just as well claim that selectors were designed with "does this node match that selector" in mind, whereas XPath was designed with "give me all the nodes that match this expression" in mind. The latter is what we are doing here.

The performance benefit seen in [1] is most likely due to not having to translate the selector into XPath. There is also the fact that the somewhat more powerful syntax of XPath might allow more of the work to be done in XPath itself, requiring less JS filtering after the XPath/selector has been run.

Then there is the fact that if someone does document.querySelector("html > body > p[foo] > span[bar]"), most (all?) current implementations walk the whole DOM trying to match each node, whereas an XPath implementation will only walk the necessary nodes for an expression like "html/body/p[@foo]/span[@bar]". This will likely become more important once :scope (or whatever it'll be called) starts to get used, since we'll have selectors like ":scope > div.foo".

On the other hand, selectors have fast class matching, which XPath 1 doesn't have. I'm not sure if XPath 2 does, but I haven't heard anyone expressing interest in implementing that.

[1] http://ejohn.org/blog/queryselectorall-in-firefox-31/

/ Jonas
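To illustrate point 2 above, this is the kind of rewrite such a library ends up doing. The selector and its translation are made-up examples, not taken from any particular toolkit, but they show why the translation is non-trivial: XPath 1.0 has no notion of class matching, so @class has to be handled as a space-separated string.

  // Hypothetical selector a page author might write:
  var selector = "div.warning > a[href]";

  // Roughly what a selector-to-XPath library has to produce,
  // since XPath 1.0 can only treat @class as a plain string:
  var xpath =
    ".//div[contains(concat(' ', normalize-space(@class), ' '), " +
    "' warning ')]/a[@href]";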
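And for concreteness, the two calls compared above look roughly like this when the raw APIs are used directly. This is only a sketch, assuming a browser that implements both the Selectors API and DOM Level 3 XPath (document.evaluate):

  // Selectors API: current implementations walk the whole tree,
  // testing each element against the full selector.
  var viaSelectors =
    document.querySelectorAll("html > body > p[foo] > span[bar]");

  // DOM XPath: the location path is evaluated step by step, so only
  // the relevant subtrees need to be visited.
  var viaXPath = document.evaluate(
    "html/body/p[@foo]/span[@bar]",
    document,
    null,
    XPathResult.ORDERED_NODE_SNAPSHOT_TYPE,
    null);

  // querySelectorAll returns a static NodeList; the XPath result
  // has to be read out with snapshotItem().
  for (var i = 0; i < viaXPath.snapshotLength; i++) {
    var span = viaXPath.snapshotItem(i);
    // ...
  }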
Received on Saturday, 27 December 2008 17:47:19 UTC