- From: Boris Zbarsky <bzbarsky@MIT.EDU>
- Date: Thu, 21 Jan 2010 12:28:40 -0500
- To: Giuseppe Bilotta <giuseppe.bilotta@gmail.com>, www-style list <www-style@w3.org>
On 1/21/10 12:11 PM, Giuseppe Bilotta wrote:

> When the DOM changes, the UA still has to rerender everything below
> _and_ above the changed elements.

That's not true at all. It has to rerender things that changed, and these are usually quite limited in scope. The problem with :has is that the process of determining what things have changed can become very, very expensive.

> Often, an element has to look at all
> its descendants to determine things such as its dimensions

With :has the issue is having to potentially look at every single thing in the document, not just descendants of the thing that changed...

This is all a solvable problem, in theory. One could restrict the performance hit to pages that use :has, at a cost in code complexity (and unexplained performance degradation if someone adds a :has selector). One could pay a significant cost in memory and code complexity to make the process faster, at the obvious expense of performance and memory usage in general. In practice, no one has figured out a sane way to do it yet.

It'd be really nice to have; everyone agrees with that. The problem is the how.

> I don't think this would be TOO much of a hit, would it?

Pretty much anything that involves spending more than 10ms on a set of DOM mutations is a hit, right (since it's directly detectable from script as a framerate drop)?

Just for reference, redoing selector matching on, say, the HTML5 single-page spec (which has a lot of nodes, but not exactly a lot of rules) takes on the order of 700ms in current Gecko trunk. Based on my profiles, the WebKit numbers are comparable. On Gmail, it takes about 10-20ms over here. On Google Reader it takes about 15-30ms. On the Slashdot front page it takes about 20-40ms. On the nytimes.com front page, 20-40ms.

-Boris
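To make the invalidation problem Boris describes concrete, here is a minimal sketch, assuming the `:has()` syntax later specified in Selectors Level 4 and a browser environment; the stylesheet rule and element names are hypothetical, not taken from the thread:

```ts
// Hypothetical stylesheet rule (not from the thread):
//   section:has(.error) { border: 2px solid red; }
//
// With ordinary descendant selectors, inserting `leaf` could only change
// the computed styles within the mutated subtree. With :has(), the
// insertion can change whether every *ancestor* <section> matches, so a
// naive engine must re-run matching far outside the mutated subtree --
// in the worst case over the whole document.
const leaf = document.createElement("span");
leaf.className = "error";

// A hypothetical node buried many levels down in the tree:
const deepContainer = document.querySelector("section section div")!;
deepContainer.appendChild(leaf); // ancestor <section>s may start matching
leaf.remove();                   // ...and may stop matching again
```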
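For the 10ms budget and the timing numbers above, a rough way to observe restyle cost from script is to force a synchronous style flush around a mutation. This is only a sketch of the idea, not how the Gecko or WebKit profiles were taken; `performance.now()` postdates this email, and reading a computed style as a flush trigger is an assumption about engine behavior:

```ts
// Measure the style-flush cost of one DOM mutation (illustrative only;
// real engine profiles, as used for the numbers above, are more reliable).
function measureRestyle(mutate: () => void): number {
  getComputedStyle(document.body).color; // flush any pending restyle first
  const start = performance.now();
  mutate();
  getComputedStyle(document.body).color; // force the new restyle to happen now
  return performance.now() - start;
}

const ms = measureRestyle(() => {
  document.body.appendChild(document.createElement("div"));
});
console.log(`restyle after mutation: ${ms.toFixed(1)}ms`);
```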
Received on Thursday, 21 January 2010 17:29:15 UTC