- From: Steinar H. Gunderson via GitHub <sysbot+gh@w3.org>
- Date: Mon, 16 Jan 2023 10:17:10 +0000
- To: public-css-archive@w3.org
> I would like to know what an implementer thinks @sesse

It's a bit tricky to comment on this, because I don't think the boundaries of the proposal have really been fully laid out yet. So let me first assume that we don't mean to support token pasting (e.g. `.foo { &_bar { … }}` becoming `.foo_bar { … }`). Second, do we intend to do this expansion before or after CSSOM?

Let me give a _very_ short introduction to how CSS selectors work in Chromium, to the point of oversimplification. We store two different representations of a stylesheet in memory, where one is roughly an array (for parsing and CSSOM purposes) and the other is roughly a hash table (for matching purposes). The former looks very much like what you'd write in a stylesheet, with the notable exception that selectors with selector lists are parsed into a tree structure and the combinator set is a bit more explicit (each and every simple selector has a combinator towards the next element). E.g., if you write `.a .b:is(.c, .d)`, what you get is basically `.a <descendant> .b <subselector> :is(<mumble>) <end-of-complex-selector-list>`, where the `<mumble>` part is a pointer in memory to another selector list (`.c <end-of-complex-selector> .d <end-of-complex-selector-list>`).

The second representation, which we actually use for matching, uses the same complex-selector representation but is otherwise completely different. In particular, it rests pretty firmly on the concept of the (complex) selector as a unit; if an author writes `.a, #id { … }` in the former, that becomes two entirely distinct rules in the latter. We also store some information like hashes of ancestor elements (for quick rejection of rules where the subject matches but the ancestors do not) in this representation, and the original flat array is completely broken up and scattered into bits and pieces.

Supporting the current spec is super-simple for us. Whenever we see `&`, we treat it nearly identically to an `:is(<mumble>)` selector; it's just that instead of following the pointer to a new list, we store a pointer to the parent rule instead, and when it's time to match, we go to its selector list just as if we had an `:is`. Nearly everything else falls out of that: specificity, parent rejection, and so on.

Now let's look at the case where we are to treat `&` as just a selector-list paste, as in the example in question. There are basically two options here: expanding the selectors during the conversion from the first form to the second, or not. Let's look at the latter first. Consider a case such as `.a { .b & .c { … }}`. The inner rule would be stored as `.b <descendant> & .c <end-of-complex-selector-list>`. So when matching, generating specificity, descendant hashes, etc., we'd need to see the `&`, make a sort-of goto (or gosub!) jump into the upper rule, and then, when seeing the end of the complex selector list in the parent, jump back. This is _possible_, but it's going to be pretty slow; right now, moving along this list is extremely fast (always just a pointer add), and now we'd introduce a branch. It's feasible, but it would slow down selector handling universally for us.

However, the bigger problem here is selector lists in the parent. I can't find any usable way of dealing with that at all. Not only would we need to know ahead of time that we'd need to start matching the inner rule twice, and then know which of the two parents to jump to; we wouldn't know where to store the rule in the first place. (We absolutely need two selectors `.a` and `#id` to be stored in different places; where would we store `& { … }` if the parent was `.a, #id { }`?) So we need to expand; there's no way around it.
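To make the storage problem concrete, here is a minimal sketch with a hypothetical rule of my own (not one from the thread), assuming the pasting interpretation:

```css
/* Hypothetical parent rule whose selector is a list: */
.a, #id {
  & { color: green; }
}

/* For matching, `.a` rules and `#id` rules live in different buckets of
   the hash-table representation. Left un-expanded, the inner `& { … }`
   rule has no single bucket it could live in; the only way to bucket it
   is to expand it into two separate rules: */
.a { color: green; }
#id { color: green; }
```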
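Expansion, in turn, compounds when nested rules themselves carry selector lists; this is the multiplicative cost discussed below. Another hypothetical sketch of mine:

```css
/* Hypothetical chained nesting with a two-way selector list at every level: */
.a1, .a2 {
  .b1 &, .b2 & {
    .c1 &, .c2 & { color: red; }
  }
}

/* Pasting the parent list at each level doubles the number of complex
   selectors: the middle level expands to 4 and the innermost to 8,
   i.e. 2^n for n levels of two-way lists: */
.c1 .b1 .a1, .c1 .b1 .a2, .c1 .b2 .a1, .c1 .b2 .a2,
.c2 .b1 .a1, .c2 .b1 .a2, .c2 .b2 .a1, .c2 .b2 .a2 { color: red; }

/* Under the current spec, the same input stays one rule per nesting level,
   with `&` holding a pointer to the parent rule's selector list, much like
   an implicit :is(). */
```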
I see that people think this is a lower cost for us than for Sass, but it absolutely isn't. The wire cost is real, but it can be compressed, and our memory representation is going to be _much_ larger than what you'd see on the wire even without compression—and Sass only needs to run once, whereas we'd pay the CPU cost every time we recreate the second form (on load, or on a stylesheet change) and the RAM cost for as long as the page is loaded.

I also see people claim that both the current spec and an expansion-like form are “multiplicative”, but that is entirely wrong; the current spec is _additive_ (n rules always cost O(n) memory and O(n) setup cost, with usually O(1) match cost and O(n) at most), while expansion is indeed multiplicative, leading to exponential behavior when chained (n rules can cost O(2^n) memory and O(2^n) setup cost; match cost is still usually O(1), but now O(2^n) at most), as the chained example above illustrates. Authors already know next to nothing about style performance. If you give them this kind of behavior, it's just a huge footgun, and there's no way they will understand that their innocent CSS blows up completely in the browser. We could try to put limits on this, but it's not entirely clear how that would work. Where would these limits be enforced? What would happen if they are violated; would rules stop matching silently? What happens when some of the rules get modified in CSSOM; does the modification fail, or does something else happen?

If we didn't have complex selector lists in the parent, we wouldn't have these problems. We could expand; it would cost us some memory and be a bit slower in the setup phase, but it would be restricted to rules that actually use nesting. Even without parent lists, I don't really think trying to match `&` directly by goto/gosub would be a very attractive option, for speed reasons (and I can imagine WebKit's JITing of selectors could also run into issues here, though I've never looked much at their JIT). The third alternative is the infamous Sass heuristics, but if so, someone would have to specify what those are if we are to discuss them. And again, remember that they'd have to be efficiently implementable very quickly (whenever a stylesheet loads or changes), so it's not a given that we can just copy Sass' implementation.

As a separate note, it is a problem to try to radically change the spec basically at the moment we are trying to ship it. This is a great way to discourage people from being the first to ship a feature.

--
GitHub Notification of comment by sesse
Please view or discuss this issue at https://github.com/w3c/csswg-drafts/issues/8310#issuecomment-1383810771 using your GitHub account

--
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config
Received on Monday, 16 January 2023 10:17:12 UTC