
Re: modifying the DOM WAS: Node append

From: Sean Hogan <shogun70@westnet.com.au>
Date: Thu, 06 Oct 2011 23:39:43 +1100
Message-ID: <4E8DA18F.10902@westnet.com.au>
To: Boris Zbarsky <bzbarsky@MIT.EDU>
CC: Ojan Vafai <ojan@chromium.org>, Anne van Kesteren <annevk@opera.com>, Aryeh Gregor <ayg@aryeh.name>, Erik Arvidsson <arv@chromium.org>, Olli@pettay.fi, Robin Berjon <robin@berjon.com>, www-dom@w3.org, Alex Russell <slightlyoff@chromium.org>
On 6/10/11 11:05 PM, Boris Zbarsky wrote:
> On 10/6/11 7:40 AM, Sean Hogan wrote:
>> One of the potential benefits of these proposed methods (when called
>> with an array-ish of nodes) is improved performance as several DOM calls
>> are replaced with one.
> I'm actually somewhat dubious of that... in particular, for the 
> existing methods a good type-specializing JIT can generate pretty good 
> code to call into the DOM fast (much faster than current UAs; at least 
> some UAs are working on this long-term).  For a method that needs to 
> deal with overloads and all the resulting complexity just the time 
> needed for that might eat up any wins from only having to go from JS 
> to C++ once... especially if the resulting C++ has to keep calling 
> back into JS a bunch of times to actually get the items out of the 
> array-ish.  This can be special-cased for nodelists, of course....
> The point being that the performance tradeoff is actually not obvious 
> here.

Thanks. My reading of that is: for a plain array of nodes, the performance can't really be better than calling (say) appendChild() separately for each node, because of all the JS <-> C++ transitions. Is that correct?
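To make the trade-off concrete, here is a toy cost model of the boundary crossings being discussed, not a description of any real engine: n separate appendChild() calls cost one JS -> C++ transition each, while one variadic call costs a single transition plus (per Boris's caveat) one callback into JS per item when the argument is a plain array-ish, with nodelists special-cased on the C++ side. The function names are purely illustrative.

```javascript
// n separate appendChild() calls: one JS -> C++ transition per node.
function crossingsForLoop(n) {
  return n;
}

// One variadic append-style call: a single JS -> C++ transition, but for a
// plain array-ish the C++ side must call back into JS once per item to read
// it out; a nodelist can be special-cased and read without callbacks.
function crossingsForVariadic(n, isNodeList) {
  return 1 + (isNodeList ? 0 : n);
}

console.log(crossingsForLoop(10));            // 10
console.log(crossingsForVariadic(10, false)); // 11
console.log(crossingsForVariadic(10, true));  // 1
```

Under this (deliberately crude) model, a plain array is never cheaper than the loop, which matches the reading above; only the special-cased nodelist path wins.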

Received on Thursday, 6 October 2011 12:40:11 UTC
