Re: Asymptotic computational complexity bounds for DOM methods

On Feb 27, 2006, at 1:15 AM, L. David Baron wrote:

> On Sunday 2006-02-26 15:07 -0800, Maciej Stachowiak wrote:
>> These don't necessarily have to be the tightest possible bounds. For
>> example, requiring Node.nextSibling to be O(1) might limit
>> implementations too much and an upper bound of O(log N) might be
>> sufficient. Sometimes there are also information-theoretic limits to
>
> Mozilla's implementation of Node.nextSibling is O(N) in the number of
> (prior) siblings of the element.

Are you sure? I tried the test below with varying numbers of spans in
the div. With an O(N) implementation of nextSibling, walking every
child should take O(N^2) time in the number of spans, but the scaling
seems to be linear in Firefox 1.5, which is what a constant-time
nextSibling would give. With ~2000 spans I get a time of 134 ms, which
is about 2x the 68 ms I get with ~1000. So that's evidence that
nextSibling is at least amortized constant time.

<div id="foo">
<span></span>
</div>
<script>
window.onload = function() {
    var foo = document.getElementById("foo");
    var n = foo.firstChild;
    var startTime = new Date();
    // Walk every child of the div via nextSibling and time the walk.
    while (n) {
        n = n.nextSibling;
    }
    var endTime = new Date();
    alert(endTime - startTime);
};
</script>
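
For clarity, the kind of implementation that would make the loop above
quadratic is one that has to rescan the parent's child list on every
call. A purely hypothetical sketch, not Mozilla's actual code:

function nextSiblingLinear(node) {
    // Scan the parent's child list to find this node, so each call is
    // linear in the number of prior siblings.
    var children = node.parentNode.childNodes;
    for (var i = 0; i < children.length; i++) {
        if (children[i] == node) {
            return children[i + 1] || null;
        }
    }
    return null;
}

Calling something like that once per child would make the whole walk
take O(N^2) time, which is the quadratic growth the timings above
don't show.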


Similarly, with the following test I could not find any change in the
timing regardless of the number of spans before the "nextToLast" span,
even when I added tens of thousands of them, which seems like evidence
that a single nextSibling call is in fact constant time:

<div>
<span></span>
<span id="nextToLast"></span>
<span></span>
</div>

<script>
window.onload = function() {
    // Time a single nextSibling call on a node with many prior siblings.
    var n = document.getElementById("nextToLast");
    var startTime = new Date();
    n = n.nextSibling;
    var endTime = new Date();
    alert(endTime - startTime);
};
</script>
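
Since Date only has millisecond resolution, one way to make any
per-call cost easier to see is to repeat the lookup in a loop. A rough
sketch of that variant (the loop count is arbitrary):

<script>
window.onload = function() {
    var n = document.getElementById("nextToLast");
    var startTime = new Date();
    // Repeat the lookup so that even a small per-call cost adds up to
    // a measurable number of milliseconds.
    for (var i = 0; i < 100000; i++) {
        var next = n.nextSibling;
    }
    var endTime = new Date();
    alert(endTime - startTime);
};
</script>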


Regards,
Maciej
