Re: Are MGET descriptions workable/necessary?

Hi Patrick,

Patrick Stickler writes:
 > 
 > [...]
 >
 > It's no more brittle than the web is.
 > 
 > If you have a URI http://example.com/blargh and you want a
 > representation of the resource denoted by that URI, you ask an HTTP
 > server hosted at example.com (which is presumed to exist) and
 > usually, you'd GET back a representation.
 > 
 > If you want a description of the resource denoted by that URI, you
 > ask the HTTP server hosted at example.com, and if that server is
 > URIQA enlightened, you'd MGET back a description.
 > 
 > [...]
 > 
 > If MGET is brittle, then so is GET.
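
(To make the symmetry concrete, here's a rough sketch. The host and
path are hypothetical, and Python's http.client sends whatever method
string you give it, so MGET needs no special client support, only a
URIQA-aware server:)

    import http.client

    conn = http.client.HTTPConnection("example.com")

    # GET: ask for a representation of the resource.
    conn.request("GET", "/blargh")
    resp = conn.getresponse()
    resp.read()  # drain the body so the connection can be reused
    print("GET ->", resp.status)

    # MGET: ask for a description (RDF) of the resource denoted by
    # the very same URI (assuming the server is URIQA-enlightened).
    conn.request("MGET", "/blargh")
    resp = conn.getresponse()
    print("MGET ->", resp.status, resp.getheader("Content-Type"))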

I agree that the mechanisms are the same. What I'm not convinced about
is the social burden this places on the term author. The difference is
that a web page, being a non-authoritative representation, can be
moved around, 302'd, redirected via an HTML link, 'update your
bookmarks, please'-d, and eventually retired.
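
A stock HTTP client copes with all of that without the reader ever
noticing; a minimal sketch (the URL is hypothetical):

    import urllib.request

    # urlopen() follows a 302 transparently, so the representation
    # can move and old links keep working.
    resp = urllib.request.urlopen("http://example.com/old-page")
    print(resp.geturl())  # the final URL after any redirects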

An RDF term is forever. Just think about that. In my lifetime I'll
probably create millions of terms (I've already been responsible for
thousands on my work intranet). For this authoritative-description
mechanism to work, I'll need to maintain an HTTP service for every
term I create, *at the URL I mint it at!*

And so will *everyone* else!

I can't split off a term and hand it to somebody else to maintain,
because it's tied to my domain name. That means I have to deal with
load, infrastructure, DNS expiry, and so on, forever!

In my opinion that makes it wholly unreliable, and not the sort of
thing to bootstrap the Semantic Web with.

Best regards,

Phil
