- From: Boris Zbarsky <bzbarsky@MIT.EDU>
- Date: Sat, 02 Jul 2011 10:07:28 -0400
- To: public-webapps@w3.org
On 7/2/11 6:28 AM, Dave Raggett wrote:
> My use case involves multiple people simultaneously editing the same
> document. The mutations due to user actions are batched and serialized
> as JSON. If you know that a given node was moved then you can avoid the
> overhead of serializing a full description of its attributes and
> content, as is necessary when describing a node to be inserted.

OK, that's similar to David's use case. Do we have any data on how common the "move" pattern is compared to the "remove and insert" pattern? That is, does having such an optimization help in practice?

> If the browser is able to build a list of all the mutations involved in
> a given user action, this would presumably be more efficient than
> leaving this to web page scripts to do.

Well, sure, for the cases when web script would build such a list. Is that the common case for mutation consumers?

For Gecko's internal mutation consumers, this is NOT a common case; the vast majority of them just want to know that "something changed" because attempting to synchronize state is too complicated to be worth it in most of those cases. An exception is the code managing the CSS box tree, but this has other requirements as well (and is _very_ complicated because it's considered performance-critical).

> It is critically important to know what nodes have been inserted,
> removed, moved, or have had their attributes changed.

For some use cases, this is a useful optimization, yes. Are those cases the common case?

> If all you know is that some of the children have changed for a given
> node, the script has to do a lot of work to find out which have changed
> and in what manner, and this will probably involve keeping a local
> duplicate of the DOM tree at considerable cost.

That may be ok, if the use cases that incur this cost are rare and the common case can be better served by a different approach.

Or put another way, if 1% of consumers want the full list because it makes them 4x faster and the other 99% don't want the full list, and the full list is 3x slower for the browser to build than just providing the information the 99% want, what's the right tradeoff? The numbers above are made up, of course; it would be useful to have some hard data on the actual use cases.

Maybe we need both sorts of APIs: one which generates a fine-grained change list and incurs a noticeable DOM mutation performance hit and one which batches changes more but doesn't slow the browser down as much...

-Boris
Received on Saturday, 2 July 2011 14:08:08 UTC
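For illustration only (not part of the archived message): a minimal sketch of the two consumer patterns contrasted above, written in JavaScript against the MutationObserver API as it was later standardized. The helpers `scheduleFullRefresh` and `sendToPeers` are hypothetical stand-ins for application code, and the serialization format is invented for the example.

```js
// Illustrative sketch: the two mutation-consumer patterns under discussion.
// Note the standardized API has no distinct "move" record type; a move
// surfaces as a removal plus an insertion, which is part of what this
// thread is debating.

// Hypothetical application helpers, stubbed out for the example.
function scheduleFullRefresh() { /* rebuild derived state from the live DOM */ }
function sendToPeers(payload) { /* ship the serialized batch to other editors */ }

// Pattern 1: coarse-grained consumer that only needs "something changed".
const dirtyFlagObserver = new MutationObserver(() => {
  scheduleFullRefresh();
});
dirtyFlagObserver.observe(document.body, {
  childList: true,
  subtree: true,
  attributes: true,
  characterData: true,
});

// Pattern 2: fine-grained consumer that serializes every change as JSON,
// e.g. for collaborative editing. Lacking a "move" primitive, it must
// serialize the full node on every insertion, even when the node merely
// changed position in the tree.
const syncObserver = new MutationObserver((records) => {
  const batch = [];
  for (const record of records) {
    if (record.type === "childList") {
      for (const node of record.addedNodes) {
        batch.push({ op: "insert", data: node.outerHTML ?? node.textContent });
      }
      for (const node of record.removedNodes) {
        batch.push({ op: "remove" /* node would be identified by a path or id */ });
      }
    } else if (record.type === "attributes") {
      batch.push({
        op: "setAttribute",
        name: record.attributeName,
        value: record.target.getAttribute(record.attributeName),
      });
    }
  }
  sendToPeers(JSON.stringify(batch));
});
syncObserver.observe(document.body, {
  childList: true,
  subtree: true,
  attributes: true,
});
```

The API that eventually shipped leans toward the second, batched shape: records are queued as mutations happen and delivered to the callback asynchronously in a batch, rather than firing a synchronous event per mutation as the older DOM mutation events did.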