- From: Emilio Cobos Álvarez via GitHub <sysbot+gh@w3.org>
- Date: Wed, 26 Oct 2022 20:04:53 +0000
- To: public-css-archive@w3.org
Gecko's CSS parser (https://github.com/servo/rust-cssparser) doesn't keep tokens around other than the very last token we tokenized (as a performance optimization): https://github.com/servo/rust-cssparser/blob/d4784093d953c56efbb5aa79d94aed646e580e6b/src/parser.rs#L162

It does have the ability to restart at arbitrary points, and in fact we use restart points quite aggressively: https://github.com/servo/rust-cssparser/blob/d4784093d953c56efbb5aa79d94aed646e580e6b/src/parser.rs#L514-L529

So my understanding is that implementing something like this should add no extra overhead for valid declarations, since we already have the parser state from before the start of the declaration: https://github.com/servo/rust-cssparser/blob/d4784093d953c56efbb5aa79d94aed646e580e6b/src/rules_and_declarations.rs#L243

It might add extra overhead for invalid declarations, however, which are unsurprisingly rather common (due to vendor prefixes and other stuff...), so it'd need some measurement with different inputs to see how bad it is. It should be feasible to measure this either inside or outside of Firefox, by the way; happy to help if someone wants to give that a shot.

--
GitHub Notification of comment by emilio
Please view or discuss this issue at https://github.com/w3c/csswg-drafts/issues/7961#issuecomment-1292593144 using your GitHub account

--
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config
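The save/restore pattern described in the comment above can be sketched as a toy Rust parser. This is not the real rust-cssparser API; `Parser`, `ParserState`, the helper methods, and the "known properties" check are all illustrative assumptions. The point it demonstrates: when state is just a byte offset, snapshotting before every declaration is free for valid input, and only the error-recovery path (restore, then skip to the next `;`) pays anything extra.

```rust
// Toy sketch of a save/restore parsing pattern (NOT the real
// rust-cssparser API): state is just a byte offset, so capturing
// and restoring it is O(1) and needs no token buffering.

#[derive(Clone, Copy)]
struct ParserState { position: usize }

struct Parser<'a> { input: &'a str, position: usize }

impl<'a> Parser<'a> {
    fn new(input: &'a str) -> Self {
        Parser { input, position: 0 }
    }
    // O(1) snapshot of the parser state.
    fn state(&self) -> ParserState {
        ParserState { position: self.position }
    }
    // O(1) restart at an arbitrary saved point.
    fn reset(&mut self, state: ParserState) {
        self.position = state.position;
    }
    fn rest(&self) -> &'a str {
        &self.input[self.position..]
    }
    fn skip_whitespace(&mut self) {
        self.position = self.input.len() - self.rest().trim_start().len();
    }
    fn consume_ident(&mut self) -> Result<String, ()> {
        self.skip_whitespace();
        let rest = self.rest();
        let end = rest.find(|c: char| !(c.is_ascii_alphanumeric() || c == '-')).unwrap_or(rest.len());
        if end == 0 { return Err(()); }
        self.position += end;
        Ok(rest[..end].to_string())
    }
    fn expect_char(&mut self, ch: char) -> Result<(), ()> {
        self.skip_whitespace();
        if self.rest().starts_with(ch) {
            self.position += ch.len_utf8();
            Ok(())
        } else {
            Err(())
        }
    }
    fn consume_value(&mut self) -> Result<String, ()> {
        self.skip_whitespace();
        let rest = self.rest();
        let end = rest.find(';').unwrap_or(rest.len());
        let value = rest[..end].trim_end().to_string();
        self.position += end + if end < rest.len() { 1 } else { 0 }; // eat ';'
        if value.is_empty() { Err(()) } else { Ok(value) }
    }
    // Hypothetical "name: value" parser; the known-property list is made up.
    fn parse_declaration(&mut self) -> Result<(String, String), ()> {
        let name = self.consume_ident()?;
        if !["color", "width"].contains(&name.as_str()) {
            return Err(()); // models an invalid declaration (e.g. a vendor prefix)
        }
        self.expect_char(':')?;
        Ok((name, self.consume_value()?))
    }
    // CSS-style error recovery: skip past the next ';'.
    fn skip_to_next_declaration(&mut self) {
        match self.rest().find(';') {
            Some(i) => self.position += i + 1,
            None => self.position = self.input.len(),
        }
    }
}

fn parse_declaration_list(input: &str) -> Vec<(String, String)> {
    let mut parser = Parser::new(input);
    let mut out = Vec::new();
    loop {
        parser.skip_whitespace();
        if parser.rest().is_empty() { break; }
        let start = parser.state(); // cheap snapshot before each declaration
        match parser.parse_declaration() {
            Ok(decl) => out.push(decl),
            Err(()) => {
                // Invalid declaration: restore the saved state, then skip
                // to the next ';' -- only this error path pays extra cost.
                parser.reset(start);
                parser.skip_to_next_declaration();
            }
        }
    }
    out
}

fn main() {
    // "-moz-foo" is rejected by the toy known-property check above.
    let decls = parse_declaration_list("color: green; -moz-foo: bar; width: 10px");
    assert_eq!(decls.len(), 2);
    println!("{:?}", decls);
}
```

The overhead concern in the comment maps onto the `Err` arm of the loop: valid declarations never touch `reset`, while every invalid declaration does a restore plus a rescan, which is why measuring with prefix-heavy real-world stylesheets matters.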
Received on Wednesday, 26 October 2022 20:04:54 UTC