
Are MGET descriptions workable/necessary?

From: Phil Dawes <pdawes@users.sourceforge.net>
Date: Fri, 21 Nov 2003 23:21:06 +0000
Message-ID: <16318.40418.975292.453070@gargle.gargle.HOWL>
To: Patrick Stickler <patrick.stickler@nokia.com>
Cc: www-rdf-interest@w3.org

Hi Patrick, Hi all,

(This is where I reveal my ignorance)

I've read through the rdfquery thread on rdf-interest, and have noted
with interest the discussion about a new MGET http method and the
distinction between representation and authoritative description.

The bit I'm having problems with (aside from the whole idea of using
HTTP URLs for persistent terms) is the requirement for each term
author to maintain a web service describing all his/her terms *at the
URLs where they were defined*.

This sounds like an incredibly brittle mechanism to me. Surely an
agent won't be able to rely on this facility being there.

My guess is that an agent will most likely need a backup mechanism for
discovering information about new terms. Probably something like using
term brokers via a standardized RDF query interface (e.g. RDFQ) to
locate other queryable resources holding information about the
term (à la Google for the conventional web).
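To make the two paths concrete, here is a minimal sketch of what an agent might do: first construct the proposed MGET request against the term's own URL, and fall back to asking a broker when that fails. The `Accept` header value and the broker's `describe` query parameter are my assumptions for illustration, not anything defined by the MGET/RDFQ proposals.

```python
from urllib.parse import urlsplit, quote

def mget_request(term_uri):
    """Build the raw HTTP request an agent would send to the term's own
    server to ask for an authoritative description, using the proposed
    MGET method. (The Accept header value is an assumption.)"""
    parts = urlsplit(term_uri)
    path = parts.path or "/"
    return (f"MGET {path} HTTP/1.1\r\n"
            f"Host: {parts.netloc}\r\n"
            f"Accept: application/rdf+xml\r\n\r\n")

def broker_fallback_url(broker, term_uri):
    """Hypothetical backup path: ask a term broker (some RDF query
    service) where descriptions of the term can be found. The
    'describe' parameter name is illustrative only."""
    return f"{broker}?describe={quote(term_uri, safe='')}"
```

The point of the sketch is that the fallback path works whether or not the term's original server supports MGET, which is what makes me question the value of the MGET path.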

If this is the case, why bother with the MGET stuff at all? It seems
like a lot of hassle for something you can't even rely on.

Am I missing something?

Many thanks,

Received on Friday, 21 November 2003 18:28:32 UTC
