Re: Do we need the restrictions on the <base> element?

Boris Zbarsky wrote:
>> XBL is again just one of the languages. If you’d include XBL directly 
>> into HTML
>
> The question is why you would do this.  The whole point of XBL is to 
> have a shared binding definition living outside the HTML file, with 
> all URI resolution in that binding happening relative to the 
> _binding_.  Including the XBL inline more or less defeats the purpose, 
> unless you're using it multiple times in that one page.

When using xi:include, the shared binding definition file still lives 
outside the HTML file, and all URI resolution *will* happen relative to 
the binding. The inclusion method simply places it directly into the 
document, forming a compound document. I’m not sure what you mean by 
using it multiple times defeating the purpose; the purpose of XBL is to 
allow the user to create bindings, and whether a binding lives in a 
separate document is just a technical detail. It is similar to 
stylesheets, which can be referenced or included inline.
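
To make that concrete, a compound document with the binding pulled in 
through XInclude could look roughly like this (the file name 
bindings.xml is of course just an example):

  <html xmlns="http://www.w3.org/1999/xhtml"
        xmlns:xi="http://www.w3.org/2001/XInclude">
    <head>
      <title>Compound document</title>
      <!-- a server-side XInclude processor would replace this element
           with the contents of the binding document -->
      <xi:include href="bindings.xml"/>
    </head>
    <body>
      <p>Content that uses the bindings.</p>
    </body>
  </html>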

One reason why one would want to include it inline (by resolving the 
xi:includes on the server) would for example be to reduce the number of 
requests to the server, which can significantly improve performance on 
high-latency connections. To illustrate: from a server in Europe to 
Japan the latency can be 300ms per request. With only 2 concurrent 
connections this creates a delay of about 1 second for every 6 files, 
and for files such as scripts the browser is even limited to a single 
connection, so then it takes 2 seconds to load.
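
Back-of-the-envelope, with those numbers:

  6 files  over 2 connections x 300ms = ~0.9s, roughly 1 second
  6 scripts over 1 connection x 300ms = ~1.8s, roughly 2 seconds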

Also, this has an effect on the DOM, because the included files are 
directly accessible inside the main document, which can also be more 
convenient.

>> These results make perfect sense because, is a couple of simple "if 
>> (!@xml:base) return parent.baseURI;" iterations really going to make 
>> any noticable difference?
>
> Depending on how many other attributes the elements have and how many 
> elements there are, absolutely.  Note that the 10-deep nesting is 
> about an order of magnitude less than what happens on typical 
> websites, by the way.  And there are usually more attributes flying 
> around.

It can easily be optimised by giving elements a private boolean flag 
‘hasXMLBase’, which is initialised to false and set to true in the 
handler for the xml:base attribute. The check then becomes a plain 
boolean test, without any attribute-enumeration overhead.
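
In pseudo-Javascript the idea would be something like this (hasXMLBase, 
xmlBase and resolveURI are made-up names for illustration; resolveURI 
stands for an RFC 2396 resolver such as [1]):

  // Called from the xml:base attribute handler; from then on the
  // lookup needs only a plain boolean test, no attribute enumeration.
  function onXMLBaseSet(element, value) {
    element.hasXMLBase = true;
    element.xmlBase = value;
  }

  // Recursive base URI lookup: one boolean check per ancestor.
  function baseURI(node) {
    var parent = node.parentNode;
    var parentBase = (parent && parent.nodeType === 1 /* element */)
      ? baseURI(parent)                      // recurse upwards
      : node.ownerDocument.documentURI;      // top: the document URI
    if (!node.hasXMLBase)                    // O(1) boolean check
      return parentBase;
    return resolveURI(node.xmlBase, parentBase);  // RFC 2396 resolution
  }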

>> Binary check, method call, return value, these are some of the 
>> simplest operations possible.
>
> Binary check in this case is not necessarily O(1) in number of 
> attributes. Method call can be a very expensive proposition (virtual 
> methods).

As I explained above, it can be an O(1) operation.

I do not see why this method call would be an expensive operation, but 
if you really think it would be, it could just as well be replaced by a 
while loop that iterates upwards over the tree; the recursive version is 
just nicer code style. It really amounts to the same thing after 
compilation, with the small difference that loops use jumps (branches, 
in Intel terminology) and method calls use calls and the stack. But 
we’re talking about mere cycles of difference here.
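
One way to write it iteratively, with the same made-up names as above:

  // Walk up once collecting xml:base values, then resolve them
  // from the document URI downwards (RFC 2396 resolution each step).
  function baseURI(node) {
    var bases = [];
    for (var n = node; n && n.nodeType === 1; n = n.parentNode) {
      if (n.hasXMLBase)                // same plain boolean check
        bases.unshift(n.xmlBase);
    }
    var base = node.ownerDocument.documentURI;
    for (var i = 0; i < bases.length; i++)
      base = resolveURI(bases[i], base);
    return base;
  }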

But anyway, we’re going around in circles here. I think my tests 
demonstrated that there isn’t any noticeable impact. I KNOW that these 
things don’t just happen for free and cost time to execute (I have a Z80 
assembly background, fwiw), but please put it in perspective. These are 
language primitives, and although they do cause overhead, it is usually 
negligible relative to the other operations that are also happening. To 
give just one example: the processing overhead of Javascript itself.

Now if this were a Z80 running at 3.5MHz I might bother optimising the 
small cases, if I really, really needed that interrupt routine to finish 
before the next one starts, every 50th or 60th of a second. But this 
code is running on gigahertz 32-bit processors (even mobile phones far 
outpace the good old MSX). If the CPU is under such load that there are 
performance problems, then these few operations are *absolutely 
insignificant*.

A good way to judge these kinds of things is by imagining that I would 
have to implement it on my 3.5MHz Z80 MSX computer from 1986. In that 
case, having to do a few recursive method calls to look up the base URI 
would be the least of my worries. The number of cycles needed to run 
Javascript in the first place, to implement the RFC 2396 URL resolution 
algorithm (please take a look at [1]; and that is still simple because I 
can use regular expressions), to load the resource from the network and 
to paint it would all far, far outweigh this.
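
To give an impression of what that resolution involves: the parsing step 
alone already takes the regular expression from RFC 2396 appendix B, and 
the path merging and ‘..’ collapsing still come on top of that. A sketch 
of just that step (the complete algorithm is in [1]):

  // URI-reference parsing, regular expression from RFC 2396 appendix B.
  var uriRegExp =
    /^(([^:\/?#]+):)?(\/\/([^\/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?/;

  function parseURI(uri) {
    var m = uriRegExp.exec(uri);
    return { scheme: m[2], authority: m[4],
             path: m[5], query: m[7], fragment: m[9] };
  }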

So can we please drop this now? The performance impact of a recursive 
lookup vs. a lookup directly on the document is negligible; I cannot see 
it any other way.

>> If this would matter in any significant way, then you’d better remove 
>> all those subroutines from all your code 
>
> Inlining code is a common optimization technique, yes.  Usually done 
> by the compiler.  ;)

*sigh* I was expecting this answer. Do you bother inlining 
non-performance-critical code? Hopefully not, as the common side effect 
is an increase in code size, which easily outweighs the very slight 
performance benefit of saving a jump.

>> and Firefox with xml:base in XHTML as well.
>
> Testcase?  That's not what I see over here.

What? You claimed this yourself, and one paragraph later in my response 
I verified that you were right, contrary to what I thought was the case 
earlier. Images are NOT automatically updated when the xml:base is 
changed; they are only updated when the src attribute is re-set. The 
testcase shows this as well. I think you misunderstood me here.


~Grauw

[1] RFC 2396 URI resolution implemented in Javascript. Running it 
10000 times takes 0.5 seconds. With test cases. 
http://www.grauw.nl/etc/tech/resolve-uri.html

-- 
Ushiko-san! Kimi wa doushite, Ushiko-san nan da!!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Laurens Holst, student, university of Utrecht, the Netherlands.
Website: www.grauw.nl. Backbase employee; www.backbase.com.

Received on Friday, 8 June 2007 00:14:59 UTC