RE: A new proposal (was: Re: which layer for URI processing?)

Tim Berners-Lee wrote:
...
>
> No, this is not such an example.  The chemical plant did not
> blow up because of a "fragile" base URI.  It blew up because the
> base URI which was clear to all parties was not used to absolutize the
> relative URI. It blew up because the definitions of identity to the "upper
> layers" and the "lower layers" were different.  It refutes the
> argument that
> the comparisons can be done differently by different layers.

But as Paul Grosso notes, the URIs:

http://example.com/./detonator and
http://example.com/detonator

will typically refer to the same resource, even though the two URIs are not
character-for-character identical. So whether a relative URI is 'absolutized'
on the client or an absolute URI is normalized on the server, there is still a
binding process that associates a URI with a resource.
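
(For illustration, a minimal Python sketch of what such purely syntactic
normalization might look like. The normalize() helper is my own invention,
and it leans on posixpath.normpath for dot-segment removal, which is only an
approximation: it also collapses duplicate slashes.)

from urllib.parse import urlsplit, urlunsplit
import posixpath

def normalize(uri):
    # Lowercase the scheme and host, and remove dot-segments from the path.
    # This is a sketch, not a full implementation of URI normalization.
    parts = urlsplit(uri)
    path = posixpath.normpath(parts.path) if parts.path else parts.path
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(), path,
                       parts.query, parts.fragment))

a = "http://example.com/./detonator"
b = "http://example.com/detonator"

print(a == b)                        # False: the strings differ
print(normalize(a) == normalize(b))  # True: both normalize to the same URI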

To take this to its logical conclusion: if you are to require absolutization
of relative namespace URIs, you ought then to require normalization of
absolute URIs ... and you will need to define server behavior in order to
make that work.
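
(Again only a sketch, with made-up base and relative values: Python's
urllib.parse.urljoin performs the relative-to-absolute resolution, but the
comparison still fails unless the absolute comparand is normalized as well.)

from urllib.parse import urljoin

# Hypothetical base document URI and relative namespace URI; these
# particular values are made up for illustration.
base = "http://example.com/schemas/./"
relative_ns = "../detonator"

absolutized = urljoin(base, relative_ns)
print(absolutized)   # http://example.com/detonator

# Absolutizing alone is not enough; the comparand must be normalized too:
print(absolutized == "http://example.com/./detonator")   # False
print(absolutized == "http://example.com/detonator")     # True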

This, it seems, is an intrinsic problem with URIs as defined.

Actually it goes further than that:

Suppose the DNS entries for example.com and another.com point to the same IP
address; in that case
http://another.com/detonator is just as dangerous.

You would need to define a normalization process that performs a DNS lookup
as well... But suppose the server at http://yetAnother.com/flower
redirects to http://example.com/detonator ... and on and on ... In reality
this is a difficult problem, and it seems it can only be solved by
understanding the semantics of the particular URI.
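
(A sketch of what a DNS-aware comparison might look like; the
same_host_by_dns() helper is hypothetical, its result depends on live DNS,
and it plainly does nothing about the redirect case.)

import socket
from urllib.parse import urlsplit

def same_host_by_dns(uri_a, uri_b):
    # Compare the two URIs' hosts by their resolved IP addresses.
    # This says nothing about paths or about servers that redirect,
    # so it is not an identity test, just one more layer of guessing.
    host_a = urlsplit(uri_a).hostname
    host_b = urlsplit(uri_b).hostname
    try:
        return socket.gethostbyname(host_a) == socket.gethostbyname(host_b)
    except socket.gaierror:
        return False

# If DNS for another.com and example.com pointed at the same address,
# this would report True even though the URI strings share no host:
print(same_host_by_dns("http://another.com/detonator",
                       "http://example.com/detonator"))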

Is there a class of problems caused by relative URIs that isn't also caused
by un-normalized URIs?

Jonathan Borden

Received on Thursday, 25 May 2000 00:23:32 UTC