
Re: [selectors-api] Summary of Feature Requests for v2

From: John Resig <jresig@mozilla.com>
Date: Thu, 24 Sep 2009 17:09:25 -0400
Message-ID: <730bab940909241409obf8b02fsbbef841b83d8400b@mail.gmail.com>
To: Jonas Sicking <jonas@sicking.cc>
Cc: Lachlan Hunt <lachlan.hunt@lachy.id.au>, public-webapps <public-webapps@w3.org>
> My concern with this API is that it forces the implementation to
> always sort the array, even if already sorted, and then do a merge
> sort on the individual results from querySelectorAll. It would be
> faster to simply run the query on each node, and then merge sort the
> results.

That's not a huge issue - if a NodeList is coming in then we can assume that
it already contains unique results in document order. It's only if it's an
array that we have to do the dance. Even in the case where the array of
results is already in document order the sort will be incredibly fast
(approaching O(N) with an adaptive sort on nearly-sorted input).
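A sketch of the dance in question - sorting a plain array into document order and dropping duplicates. A real implementation would compare nodes with Node.compareDocumentPosition; here a numeric `pos` property stands in for document position so the sketch is self-contained, and `toDocumentOrder` is a hypothetical helper name.

```javascript
// Hypothetical sketch: normalize an array of elements into unique,
// document-ordered results. `pos` stands in for document position.
function toDocumentOrder(elements) {
  // Sort by position; adaptive sorts (e.g. merge sort / Timsort)
  // approach O(N) when the input is already in order.
  const sorted = elements.slice().sort((a, b) => a.pos - b.pos);
  // Duplicates are adjacent after sorting, so drop repeats in one pass.
  return sorted.filter((el, i) => i === 0 || el !== sorted[i - 1]);
}

const a = { pos: 1 }, b = { pos: 2 }, c = { pos: 3 };
console.log(toDocumentOrder([c, a, b, a]).map((el) => el.pos)); // [ 1, 2, 3 ]
```

When the caller hands in a NodeList instead, this whole step can be skipped, which is the distinction the paragraph above draws.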

> > If this is how it's implemented it actually becomes really useful to
> > have the NodeList-based element filtering.
> >
> >     document.createNodeList([ ... some elements ... ]).filterSelector("em, strong")
> >
> > (Since this would be much faster than using Array.filter or some other
> > method.)
> Are you sure that Array.filter would result in a significant perf hit?
> What with recent jitting and stuff it would be great if we don't have
> to rely on making everything a native method, but can rely on
> javascript to do part of the work without taking too much of a perf
> hit.

I can guarantee that it'll be slower than doing it natively - especially so
in Internet Explorer 8.next (the more the browser does under the covers, the
faster we can provide results).
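For contrast, here is roughly what the script-level Array.filter approach under discussion would look like. filterSelector was only a proposal at this point, and per-element selector matching (later standardized as Element.matches) wasn't yet widely available, so mock objects with a matches() method keep the sketch self-contained; the function name is just illustrative.

```javascript
// Illustrative script-level fallback: filter an array of elements by a
// selector, calling a per-element matches() check - the work a native
// filterSelector could do in one pass under the covers.
function filterBySelector(elements, selector) {
  return elements.filter((el) => el.matches(selector));
}

// Mock elements: matches() checks the tag against the selector list.
const makeEl = (tag) => ({
  tag,
  matches: (sel) => sel.split(",").some((part) => part.trim() === tag),
});

const em = makeEl("em");
const div = makeEl("div");
console.log(filterBySelector([em, div], "em, strong").map((el) => el.tag)); // [ 'em' ]
```

The script version crosses the JS/native boundary once per element for the matches() call, which is the overhead a native implementation avoids.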

Received on Thursday, 24 September 2009 21:10:27 UTC
