- From: Maciej Stachowiak <mjs@apple.com>
- Date: Sun, 26 Feb 2006 15:07:13 -0800
- To: "Web APIs WG (public)" <public-webapi@w3.org>
We should consider placing computational complexity bounds on DOM methods in the future. For example, it would be perfectly legal to make getElementById O(N) in the number of elements in the document, or even O(N^2). But in practice, unless it is O(1), your implementation will not be interoperable with web content, as there will be web apps that lock the user's machine for hours or even days.

These don't necessarily have to be the tightest possible bounds. For example, requiring Node.nextSibling to be O(1) might limit implementations too much, and an upper bound of O(log N) might be sufficient.

Sometimes there are also information-theoretic limits to consider. For instance, it is impossible to make both Node.insertBefore and Node.childNodes.item O(1) in the same implementation, although you could make either one O(1) and the other O(N), or (I think) both O(log N).

So maybe this is not a useful exercise for all methods, but for some where the performance issues are non-obvious, it may be helpful. There are also issues with testing this.

Regards,
Maciej
Received on Sunday, 26 February 2006 23:07:14 UTC