- From: Patrick Stickler <patrick.stickler@nokia.com>
- Date: Sun, 23 Nov 2003 12:06:53 +0200
- To: "ext Phil Dawes" <pdawes@users.sourceforge.net>
- Cc: www-rdf-interest@w3.org
On Saturday, Nov 22, 2003, at 01:21 Europe/Helsinki, ext Phil Dawes wrote:
> Hi Patrick, Hi all,
>
> (This is where I reveal my ignorance)
>
> I've read through the rdfquery thread on rdf-interest, and have noted
> with interest the discussion about a new MGET http method and the
> distinction between representation and authoritative description.
>
> The bit I'm having problems with (aside from the whole idea of using
> http urls for persistent terms) is the requirement for each term
> author to maintain a web service describing all his/her terms *at the
> url it was defined at*.
>
> This sounds like an incredibly brittle mechanism to me. Surely an
> agent won't be able to rely on this facility being there.
>
It's no more brittle than the web is.
If you have a URI http://example.com/blargh and you want a representation of the resource denoted by that URI, you ask an HTTP server hosted at example.com (which is presumed to exist) and usually, you'd GET back a representation.
If you want a description of the resource denoted by that URI, you ask the HTTP server hosted at example.com, and if that server is URIQA-enlightened, you'd MGET back a description.
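To make that concrete, here is a minimal sketch in Python of both requests against the same URI. The host, path, and RDF/XML response type are illustrative assumptions; the point is only that MGET is an ordinary HTTP method name, which http.client passes through unchanged:

    import http.client

    # Ordinary web access: GET a representation of the resource.
    conn = http.client.HTTPConnection("example.com")
    conn.request("GET", "/blargh")
    rep = conn.getresponse()
    print(rep.status, len(rep.read()))
    conn.close()

    # URIQA access: same URI, but MGET an authoritative description.
    # (That example.com actually supports MGET is an assumption here;
    # an unenlightened server would likely answer 405 or 501.)
    conn = http.client.HTTPConnection("example.com")
    conn.request("MGET", "/blargh",
                 headers={"Accept": "application/rdf+xml"})
    desc = conn.getresponse()
    print(desc.status, len(desc.read()))
    conn.close()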
This does not preclude the existence of any other service by which you can obtain 3rd-party descriptions of any resource -- just as one can query many varied repositories, such as google, etc., for representations.
If MGET is brittle, then so is GET.
> My guess is that it will most likely have to have a backup mechanism
> for discovering information about new terms. Probably something like
> using term brokers via a standardized rdf query interface (e.g. RDFQ),
> to locate other queryable resources for getting information about the
> term. (a la google for the conventional web)
>
Exactly.
> If this is the case, why bother with the MGET stuff at all? It seems
> like a lot of hassle for something you can't even rely on.
Because, in order to bootstrap the SW, there must be a standardized
protocol by which, having only a URI, one can obtain an authoritative
description of the resource denoted by that URI.
Just think how inefficient the web would be if, for any given URL, you couldn't just do a
GET {URL} HTTP/1.1
but you'd first have to know, or find out, the service which hosted that resource, and say
GET {SERVICE} {URL} HTTP/1.1
The web would be *a lot* less efficient -- if it would exist at all.
Why, then, is it unreasonable for a SW agent to be able to simply ask
MGET {URL} HTTP/1.1
rather than have to know or find out some service at which a description
is hosted and ask
GET {SERVICE} {URL} HTTP/1.1
???
Though, note that for both representations and descriptions, the same
method of access
GET {SERVICE} {URL} HTTP/1.1
is valid and useful -- but simply not as the "atomic" protocol for
client/server interaction.
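By contrast, the service-mediated form might be sketched like this; the descriptions.example.org host, the /describe endpoint, and its uri parameter are purely hypothetical conventions for illustration, since nothing standardizes them -- which is exactly the problem:

    import http.client
    import urllib.parse

    # The URI we want described, handed to a hypothetical third-party
    # description service as a query argument. Every such service could
    # choose a different host, path, and parameter name.
    target = "http://example.com/blargh"
    query = urllib.parse.urlencode({"uri": target})

    conn = http.client.HTTPConnection("descriptions.example.org")
    conn.request("GET", "/describe?" + query,
                 headers={"Accept": "application/rdf+xml"})
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))
    conn.close()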
> Am I missing something?
>
Not much.
Cheers,
Patrick
> Many thanks,
>
> Phil
>