- From: Tim Berners-Lee <timbl@w3.org>
- Date: Mon, 4 Mar 2002 15:44:11 -0500
- To: "TAG" <www-tag@w3.org>, "Tim Bray" <tbray@textuality.com>
IMHO

----- Original Message -----
From: "Tim Bray" <tbray@textuality.com>
To: "TAG" <www-tag@w3.org>
Sent: Thursday, February 28, 2002 2:26 PM
Subject: Re: [namespaceDocument-8] 14 Theses, take 2

> At 06:54 PM 27/02/02 -0500, Tim Berners-Lee wrote:
> > I'm with Tim on points 1-4.
>
> > 5. Where all the information available can be expressed in one (not too
> > long) document then an indirection for the sake of it is an engineering
> > mistake. So clients should be prepared to accept information directly or
> > indirectly, ideally.
>
> Here I disagree strongly. Indirection is cheap and its benefits
> are high. We should create an expectation that in the normal case (a
> plurality of definitive resources) the author will show responsibility
> by providing a directory to them. And I continue to think that
> the namespace with a single defining document is an architecturally
> uninteresting corner case.

We differ strongly, then. Indirection is not cheap. HTTP was designed, largely, to halve the number of TCP connections that had to be made to get hold of a document, and since then huge amounts of effort in HTTP 1.1 have gone into reducing the number of connections that must be set up.

The problem with any redirection like that is that, whatever the underlying protocol, you end up having to send packets across the world and back twice, and the speed of light limits how fast you can possibly be. So introducing a round trip just for flexibility isn't cheap. Content negotiation in HTTP specifically had to work in a single round trip - which is why it has Accept headers and so on rather than just returning a directory. P3P privacy negotiation was limited similarly, because no one wanted to put an extra round trip into the protocol.

So I would argue that it isn't cheap, and really you are trying to argue that, compared to a web page fetch, you aren't going to do it often.
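A minimal sketch of the latency argument above, with hypothetical URLs and a toy resolver standing in for the network (nothing here is from the message itself): each hop, redirect or document, costs one full round trip, and no protocol can beat the speed of light on that.

```python
def round_trips(url, responses):
    """Count round trips until a document is returned.

    `responses` maps a URL either to ('redirect', next_url)
    or to ('document', body). Each lookup models one network
    round trip to the server.
    """
    trips = 0
    while True:
        trips += 1
        kind, payload = responses[url]
        if kind == "document":
            return payload, trips
        url = payload  # follow the redirect: one more round trip

# Direct case: the namespace URI serves the defining document itself.
direct = {"http://example.org/ns": ("document", "<schema/>")}

# Indirect case: the namespace URI first returns a directory of resources.
indirect = {
    "http://example.org/ns": ("redirect", "http://example.org/ns/dir"),
    "http://example.org/ns/dir": ("document", "<directory/>"),
}

print(round_trips("http://example.org/ns", direct)[1])    # 1 round trip
print(round_trips("http://example.org/ns", indirect)[1])  # 2 round trips
```

The directory-first convention makes the second pattern the common case, doubling the fixed latency of every fresh namespace lookup.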
This may work when you come across a new web namespace only every time the W3C produces a new recommendation for a markup language. It fits with your implicit scenario of the programmer sitting back in his chair and wondering, "what have we here, then?", pasting the namespace into his browser, reading around, and printing off the material for bedtime reading. With this scenario, and the fact that it is important, I agree.

However, it does not support another scenario, as follows. A semantic web query engine has been looking for a soc:Person who is soc:member of some:group. Reading a document, it comes across the statement that w3c:tbray foo:paidupMember some:group. It has never heard of the foo: space before, but in a twinkling it has picked up the namespace document, dereferenced it (done an indirection or not), found that foo:paidupMember is a subProperty of soc:member with domain soc:Person, and deduced that indeed w3c:tbray must be a soc:Person and a soc:member of some:group. This is a rudimentary example of the value of the namespace document being machine-processable, and it demonstrates the need for speed.

I would also point out that in the semantic web, individual things as well as properties are identified by URI references and may be in namespaces, so the machine might also need to dereference w3c:tbray to find out some information about the person. In traditional XML markup use, element and attribute types are special in that they are only occasionally defined, and those definitions are made rarely. But at a basic level there is no difference between the use of a URI reference with a QName and its use without one, as in an href. It is a reference, and I would hate to make all references which use the syntactic device of a QName pay by doubling the number of round trips to dereference.

So, if we can make it so that the document it picks up, by hook or by crook, looks intelligible to the programmer too, can we do away with the redirection?

Tim BL
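The entailment step in the query-engine scenario above can be sketched in a few lines. This is a hypothetical illustration, not code from the message: triples are plain tuples, the prefixes (foo:, soc:, w3c:) are the made-up ones from the example, and only the two RDFS rules the scenario uses (rdfs:subPropertyOf and rdfs:domain) are implemented.

```python
SUBPROP = "rdfs:subPropertyOf"
DOMAIN = "rdfs:domain"
TYPE = "rdf:type"

def entail(triples):
    """Apply the subPropertyOf and domain rules to a fixed point."""
    triples = set(triples)
    while True:
        new = set()
        for s, p, o in triples:
            for p2, rel, x in triples:
                if p2 != p:
                    continue
                if rel == SUBPROP:
                    new.add((s, x, o))   # (p subPropertyOf q), (s p o) => (s q o)
                elif rel == DOMAIN:
                    new.add((s, TYPE, x))  # (p domain c), (s p o) => (s type c)
        if new <= triples:
            return triples
        triples |= new

# The statement found in the document, plus what the (hypothetical)
# foo: namespace document said about its property:
facts = {
    ("w3c:tbray", "foo:paidupMember", "some:group"),
    ("foo:paidupMember", SUBPROP, "soc:member"),
    ("foo:paidupMember", DOMAIN, "soc:Person"),
}
inferred = entail(facts)
# Now entailed: w3c:tbray soc:member some:group,
# and w3c:tbray rdf:type soc:Person.
```

The whole deduction hinges on the engine getting the three schema triples from the namespace document quickly, which is exactly where an extra redirect hurts.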
Received on Monday, 4 March 2002 15:45:22 UTC