
Re: Are MGET descriptions workable/necessary?

From: Patrick Stickler <patrick.stickler@nokia.com>
Date: Sun, 23 Nov 2003 12:06:53 +0200
Cc: www-rdf-interest@w3.org
To: "ext Phil Dawes" <pdawes@users.sourceforge.net>
Message-Id: <C31E1A6C-1D9C-11D8-92B5-000A95EAFCEA@nokia.com>

On Saturday, Nov 22, 2003, at 01:21 Europe/Helsinki, ext Phil Dawes wrote:

> Hi Patrick, Hi all,
> (This is where I reveal my ignorance)
> I've read through the rdfquery thread on rdf-interest, and have noted
> with interest the discussion about a new MGET http method and the
> distinction between representation and authoritative description.
> The bit I'm having problems with (aside from the whole idea of using
> http urls for persistent terms) is the requirement for each term
> author to maintain a web service describing all his/her terms *at the
> url it was defined at*.
> This sounds like an incredibly brittle mechanism to me. Surely an
> agent won't be able to rely on this facility being there.

It's no more brittle than the web is.

If you have a URI http://example.com/blargh and you want a representation
of the resource denoted by that URI, you ask an HTTP server hosted at
example.com (which is presumed to exist) and usually, you'd GET back a
representation.
If you want a description of the resource denoted by that URI, you ask the
same HTTP server hosted at example.com, and if that server is URIQA enabled,
you'd MGET back a description.
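
For concreteness, a sketch of the two exchanges (the response details here
are illustrative, not normative; the URIQA specification governs the actual
behavior):

   GET /blargh HTTP/1.1
   Host: example.com

   => 200 OK, Content-Type: text/html
      (a representation, e.g. a web page)

   MGET /blargh HTTP/1.1
   Host: example.com

   => 200 OK, Content-Type: application/rdf+xml
      (a description: RDF statements about the resource denoted by the URI)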

This does not preclude the existence of any other service by which you
can obtain 3rd party descriptions about any resource -- just as one can
query many varied repositories, such as Google, for representations.
If MGET is brittle, then so is GET.

> My guess is that it will most likely have to have a backup mechanism 
> for
> discovering information about new terms. Probably something like using
> term brokers via a standardized rdf query interface (e.g. RDFQ), to
> locate other queryable resources for getting information about the
> term. (a la google for the conventional web)


> If this is the case, why bother with the MGET stuff at all? It seems
> like a lot of hassle for something you can't even rely on.

Because, in order to bootstrap the SW, there must be a standardized
protocol by which, having only a URI, one can obtain an authoritative
description of the resource denoted by that URI.
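
As a minimal sketch of that bootstrapping from the client side -- assuming
a server that understands MGET, and using Python's standard http.client
purely for illustration (mget here is a hypothetical helper, not part of
any library):

   import http.client

   def mget(host, path):
       # MGET is not a registered HTTP/1.1 method; http.client sends the
       # method string verbatim, so the request reaches any server, but
       # only a URIQA-capable server will answer with a description.
       conn = http.client.HTTPConnection(host)
       conn.request("MGET", path, headers={"Accept": "application/rdf+xml"})
       resp = conn.getresponse()
       body = resp.read()
       conn.close()
       if resp.status == 200:
           return body  # an RDF description of the resource
       return None      # server is not URIQA-capable

   description = mget("example.com", "/blargh")

Note that the client needs nothing beyond the URI itself: the host and path
come straight from the URI, which is exactly the property the bootstrapping
argument depends on.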

Just think how inefficient the web would be if, for any given URL,
you couldn't just do a

   GET http://example.com/blargh

but you'd first have to know, or find out, the service which hosted that
resource, and say something like

   GET http://example.com/some-service?uri=http://example.com/blargh

The web would be *a lot* less efficient -- if it would exist at all.

Why, then, is it unreasonable for a SW agent to be able to simply ask

   MGET http://example.com/blargh

rather than have to know or find out some service at which a description
is hosted and ask something like

   GET http://example.com/some-service?uri=http://example.com/blargh

Though, note that for both representations and descriptions, the same
service-mediated method of access

   GET http://example.com/some-service?uri=http://example.com/blargh

is valid and useful -- but simply not as the "atomic" protocol for
client/server interaction.
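
Continuing the sketch above -- if MGET fails, a client can still fall back
to such a service-mediated query (the service URL is purely hypothetical):

   import urllib.parse, urllib.request

   def describe(uri, service="http://example.com/some-service"):
       parts = urllib.parse.urlsplit(uri)
       # First try the atomic protocol: ask the authority itself.
       desc = mget(parts.netloc, parts.path or "/")
       if desc is None:
           # Fall back to a (hypothetical) third-party query service.
           query = service + "?uri=" + urllib.parse.quote(uri, safe="")
           with urllib.request.urlopen(query) as resp:
               desc = resp.read()
       return desc

   description = describe("http://example.com/blargh")

This depends on the mget helper from the previous sketch: the atomic
protocol comes first, and third-party services remain available as a
complement rather than a prerequisite.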

> Am I missing something?

Not much.



> Many thanks,
> Phil
Received on Sunday, 23 November 2003 05:09:48 UTC
