Re: "fast vs complete" is "SAX vs DOM"? and the IDs?

On Sat, Feb 28, 2015 at 10:04 PM, Peter Krauss <ppkrauss@gmail.com> wrote:
> Is there a formal or more complete definition of the fast profile than
> this one?
>    http://dev.w3.org/csswg/selectors-4/#profiles
> What does "fast" mean?
>
> - - - -
> Some blogs and discussions try to explain what is "slow" in the
> fast-profile concept...
> I learned from a blog, and now, personally, I understand that
>     the "fast profile" amounts to assuming an "online algorithm",
>     https://en.wikipedia.org/wiki/Online_algorithm
> so, in XML terms, the fast side is a good analog of the SAX side in the
> "SAX vs DOM" dichotomy.
>
> In the fast profile the document is traversed once, matching elements as
> the online algorithm goes...
> OK, but is it strictly online, as in a streaming problem, or...
>    can we assume some pre-parse that knows all the IDs?
>    can we assume "memory of the IDs parsed so far"? "memory of all the
> classes so far"?
>    ... other assumptions?
> What is the real and complete scenario for a CSS parser in the fast
> profile?
>
> - - - -
>
> PS: is this a "benchmark game" with black boxes (the future) or white
> boxes (the present)?
> The near future of CSS parsers is a gray box... That's fine: let's put
> some rules in this game!

Strictly speaking, "fast" means... fast.  There are a handful of
things we think are useful selectors, but browsers believe they can't
implement them in a way that's fast enough to be used in CSS proper;
some of them would slow down the entire page just by being used once
in a stylesheet.

The distinction can be analogized to the difference between SAX and
DOM, but that's not a strict analogy.  SAX vs DOM is streaming vs
retained; "fast" vs "complete" profiles are "continuously matching"
versus "matches once".

Like Boris said in the previous thread, browsers don't actually match
selectors against a tree (though, for simplicity, that's how we
describe it in the specs).  Instead, they take individual elements and
see which selectors apply to them.  Initially this is done during
page-load, as each element is parsed from the document's text; later
this happens on each mutation, as things change on the page.  The two
situations have fairly different requirements, but they share one
constraint: more complicated tree traversals make both
*significantly* slower.  (During page-load, you might not be able to
fully determine the style of a node until the whole page loads, which
means you can't accurately display the page as it loads.  Later, it
means that after a mutation the set of "dirty elements" that need to
be checked for possible changes in which selectors they match gets
much larger, possibly encompassing the entire tree.)
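
A toy sketch of that per-element view (nothing like a real engine's
data structures, just the shape of the cost): checking a selector
like "article p" against one element is only a walk up its ancestors,
while a selector that has to look down into the subtree makes every
one of those per-element checks, and the dirty set after a mutation,
much bigger.

    // Toy per-element check for "article p": the cost is bounded by the
    // element's depth, i.e. one walk up the ancestor chain.
    function matchesArticleP(el: Element): boolean {
      if (el.tagName !== 'P') return false;
      for (let a = el.parentElement; a !== null; a = a.parentElement) {
        if (a.tagName === 'ARTICLE') return true;
      }
      return false;
    }

    // A check for something like "article:has(p)" has to search the whole
    // subtree instead, and a mutation anywhere in that subtree can change
    // the answer, which is exactly the "dirty elements" blow-up above.
    function matchesArticleHasP(el: Element): boolean {
      return el.tagName === 'ARTICLE' && el.querySelector('p') !== null;
    }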

On the other hand, a one-and-done selector match like the Selectors
API doesn't suffer so badly.  A more complicated tree traversal is
more expensive than a simpler one, sure, but it's not a huge
difference.  Plus, if you don't offer the more complicated selectors
directly, people will just do the tree traversals themselves in JS,
and probably be slower than the browser could be, so omitting them
doesn't actually make pages faster for users.
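
For instance (my example, with an arbitrary selector): if the API
refuses 'section:has(img)', authors just hand-roll the same traversal
in script, so the work doesn't go away, it just moves out of the
engine.

    // Hand-rolled stand-in for querySelectorAll('section:has(img)'): the
    // traversal still happens, only in script instead of in the engine,
    // and it's unlikely to beat a native implementation.
    const sectionsWithImages =
      Array.from(document.querySelectorAll('section'))
        .filter(section => section.querySelector('img') !== null);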

And yes, the definition of what fits into "fast" vs "complete" will
change over time, as browsers figure out clever tricks to optimize
things, or as computers get faster.

~TJ

Received on Sunday, 1 March 2015 19:05:07 UTC