Re: Do we need the restrictions on the <base> element?

Laurens Holst wrote:
> I don’t think this is really a big deal. O(1) is obviously better than 
> O(log n), but O(log n) is still fast.

It's fast as a one-off.  If you have to do it a lot, it's noticeably slower. 
People usually complain when some DOM operation becomes 2-3 times slower than it 
used to be, in my experience.
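
To make the one-off-versus-hot-loop point concrete, here's a rough sketch 
(TypeScript for illustration; the universal selector is just a stand-in for 
"lots of nodes"):

    // A single baseURI read is cheap no matter how it's implemented.
    const once = document.body.baseURI;

    // A hot loop pays the per-lookup cost on every iteration, so the
    // difference between O(1) and O(log n) shows up in profiles.
    let total = 0;
    document.querySelectorAll("*").forEach(node => {
      total += node.baseURI.length;  // forces a fresh lookup each time
    });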

> And how recursively looking up 
> some value would make memory usage ‘much bigger’ I don’t know.

Please read what I wrote again.  Carefully, this time.  I said that the options 
are either a perf hit (lookups every time) or a memory hit (cache the value).
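
To spell that out, here's a sketch of the two options (hypothetical 
structures, not Gecko's actual code; relative xml:base values are ignored 
for brevity):

    interface SketchNode {
      parent: SketchNode | null;
      xmlBase: string | null;   // value of xml:base, if present
      cachedBaseURI?: string;   // option 2 only: per-node memory cost
    }

    // Option 1: perf hit -- walk the ancestor chain on every access.
    function lookupBaseURI(node: SketchNode, docBase: string): string {
      for (let n: SketchNode | null = node; n; n = n.parent) {
        if (n.xmlBase !== null) return n.xmlBase;
      }
      return docBase;
    }

    // Option 2: memory hit -- store the resolved value on every node,
    // and redo this for all descendants whenever an xml:base mutates.
    function cacheBaseURI(node: SketchNode, docBase: string): void {
      node.cachedBaseURI =
        node.xmlBase ?? node.parent?.cachedBaseURI ?? docBase;
    }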

> Also, in your case of image swapping, there is network overhead for 
> retrieving the new image.

No, there isn't.  Once both of the images being swapped between are loaded 
(during pageload), the only overhead is painting the image -- both images are 
already stored in decoded form in platform bitmaps, usually in the graphics 
card's memory.  So the painting is quite fast.
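
For reference, the pattern I mean is the classic preloaded rollover; 
something like this (the image names and id are placeholders):

    // Both images are fetched during pageload; the later swap is a
    // repaint from already-decoded bitmaps, with no network traffic.
    const over = new Image();
    over.src = "button-over.png";

    const button = document.querySelector<HTMLImageElement>("#button")!;
    button.addEventListener("mouseover", () => {
      // Each assignment resolves the URL against baseURI, then paints.
      button.src = over.src;
    });
    button.addEventListener("mouseout", () => {
      button.src = "button.png";
    });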

> This way outweighs the insignificant extra 
> time it takes to perform the O(log n) operation for resolving the baseURI.

Do you have actual data to back this up?  Or just feelings?  I have profiles 
showing that the baseURI issue is a significant component of image swapping 
performance in Gecko.

> * Nowhere is it specified that this should actually happen. *

If you're willing to accept hysteresis in your DOM, sure.  But generally, it's 
accepted that as much as possible the rendering of the document should reflect 
the state of the DOM, not how the DOM got into that state.  If you abandon this 
principle, all sorts of "optimizations" become possible.  ;)
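
Concretely, "no hysteresis" means that undoing a mutation restores the 
original state. Something like this (illustrative only; assumes an element 
with id "target"):

    const xmlNS = "http://www.w3.org/XML/1998/namespace";
    const el = document.getElementById("target")!;

    const before = el.baseURI;
    el.setAttributeNS(xmlNS, "xml:base", "http://example.com/other/");
    el.removeAttributeNS(xmlNS, "base");

    // The DOM is back in its original state, so a dynamic
    // implementation must report the original base URI; a
    // snapshot-at-parse implementation would not.
    console.assert(el.baseURI === before);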

And yes, I know that <html:base> already violates this principle. This is for 
compat reasons as much as anything else, harking back to the days of Netscape 
and its stack-based non-DOM handling of HTML.

> The current implementation of Firefox does not resolve baseURI 
> dynamically. Instead, it seems baseURI is a constant which is set on 
> every element while parsing.

That's certainly not the case for XML nodes in Firefox.  For HTML nodes, it is 
true.

> Note that contrary to your statement in 
> your followup-mail to Henrik Dvergsdal’s message ("you'll note that 
> Firefox doesn't implement xml:base for XHTML"), Firefox *does* support 
> xml:base.

Ah, looks like we changed that.  Very good.

> I guess Firefox’s current behaviour is that while it’s parsing, it 
> creates a constant baseURI attribute for each element.

In HTML.  And it's actually stored on the document, unless there are multiple 
<base> tags, in which case we do store things on elements.
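
Roughly like this (hypothetical structures, not our actual ones):

    class DocSketch {
      docBase = "http://example.com/";      // from the single <base>
      overrides = new Map<Node, string>();  // only with multiple <base> tags

      // Common case: O(1), no per-element memory.  Rare case: one map
      // lookup for the elements that carry an override.
      baseURIFor(el: Node): string {
        return this.overrides.get(el) ?? this.docBase;
      }
    }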

> Any overhead because the retrieval operation changes from O(1) to O(log 
> n) will probably not be noticeable

Again, I'd love to see hard data backing this up.

> because 1. the current method has an 
> initial O(n) operation

It doesn't, since the base URI is stored on the document.

> any overhead of O(log n) compared to 
> O(1) will be insignificant because the result would be an HTTP retrieval 
> operation.

See the beginning of this mail.

> Finally, if you really want to you can optimise the whole baseURI 
> implementation by making the getter check for a global flag which 
> indicates whether "xml:base" is used anywhere.

Yep, so that any page that uses it gets an immediate performance hit (just like 
mutation events).  But is that desirable?  Again, authors aren't happy when DOM 
methods start being 2-3 times slower just because someone stuck a single 
attribute somewhere in the document.
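
For what it's worth, the flag idea would look roughly like this (hypothetical 
names, not a real implementation):

    type FlagNode = { parent: FlagNode | null; xmlBase: string | null };

    class FlaggedDoc {
      staticBase = "http://example.com/";
      usesXmlBase = false;  // flipped the first time xml:base appears

      baseURIFor(el: FlagNode): string {
        // Fast path: the document never uses xml:base.
        if (!this.usesXmlBase) return this.staticBase;
        // Slow path: one xml:base attribute anywhere, and every
        // lookup in the document now walks the ancestor chain.
        for (let n: FlagNode | null = el; n; n = n.parent) {
          if (n.xmlBase !== null) return n.xmlBase;
        }
        return this.staticBase;
      }
    }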

> 3. Not support dynamic operations of any kind on base URIs. This is the 
> current behaviour of Firefox.

That statement is false, certainly for XHTML, given that Firefox does support 
xml:base for it.

-Boris
