Re: Site metadata; my preference

----- Original Message -----
From: <>
Sent: Wednesday, February 12, 2003 1:08 PM
Subject: RE: Site metadata; my preference

Mark Baker:
>> My preference would be for an optional response header, "Metadata" or
>> some such, returned via GET and HEAD.
> Fair enough, but this is inefficient, as it requires two system
> calls to get metadata,
> and requires the doubling of URIs on the Web,
> one to denote resources and one to denote its metadata.

This is a crazy argument!   I assume you were serious.
The URIs don't have any overhead on the server, as the
resources are virtual documents generated in response to
a GET in just the same way as you would generate a response to
your MGET.  On the client, who asked for the information anyway,
there is no extra overhead.  On proxy servers, you do have the
possibility of caching this data.  But that is a good thing, and just
one of the reasons for using GET: you reuse the whole proxy and
caching infrastructure.
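To make the "virtual document" point concrete, here is a minimal sketch in Python. The `,meta` URI suffix is purely illustrative (no such convention is defined in the thread), and the RDF is a toy fragment; the point is only that the description is generated on demand in answer to an ordinary GET, so it gets a URI of its own and ordinary caches can hold it.

```python
# Sketch: metadata served as a virtual document via plain GET.
# The ",meta" suffix and the RDF shape are illustrative assumptions.

def metadata_uri(resource_uri: str) -> str:
    """Map a resource URI to a hypothetical companion metadata URI."""
    return resource_uri + ",meta"

def serve_get(uri: str, resources: dict) -> str:
    """Answer a GET: either the resource itself, or a description
    of it generated on the fly (never stored as a separate file)."""
    if uri.endswith(",meta"):
        target = uri[: -len(",meta")]
        # Generate a small RDF description of the target resource.
        return ('<rdf:Description rdf:about="%s">'
                '<dc:title>%s</dc:title>'
                '</rdf:Description>' % (target, resources[target]))
    return resources[uri]
```

Because the description has its own URI and is fetched with GET, any proxy between client and server may cache it with no new machinery, which is the reuse argument being made above.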

> And because of the burden of minting new URIs, it encourages
> the creation of monolithic schemas which describe large numbers
> of resources, and therefore can easily cause a new form of
> infoglut for semantic web agents.

There will be no "burden" of minting new URIs that I can see.

> The semantic web architecture should provide for a formal
> definition of concise, bounded descriptions of specific
> resources which are accessible by the URI that denotes the
> resource in question.

I think there is actually a fallacy in the thinking there - that
the information about one thing will naturally be
well bounded, and everyone will want the same information.
If that were the case, then you could put it into headers in a HEAD
response.  (This is in fact a possibility too: to put
RDF, instead of just a URI, into a header.)
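A rough sketch of that header alternative, for comparison. The "Metadata" header name is the one proposed in the quoted message, not a registered HTTP header, and the `,meta` URI suffix is again only an illustrative assumption:

```python
# Sketch: a HEAD (or GET) response carrying metadata in a header,
# either inline as a scrap of RDF or as a pointer to a separate
# metadata resource.  Header name and URI convention are assumptions.

def head_response_headers(resource_uri: str, inline_rdf: bool = False) -> dict:
    if inline_rdf:
        # Only workable if the description is small and well bounded.
        rdf = '<rdf:Description rdf:about="%s"/>' % resource_uri
        return {"Metadata": rdf}
    # Otherwise just point at a hypothetical companion metadata URI.
    return {"Metadata": resource_uri + ",meta"}
```

The inline variant is exactly what breaks down in the next paragraph: a server may have far more to say about a resource than fits comfortably in a header.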

But in practice, a server has a huge amount of information it
*could* give: privacy and IPR information, persistence information,
change history, access control information, access control history,
schema validity, workflow state and robot control are some things which
come to mind, but that's just thinking about it for a moment.

>> I don't like MGET for the reasons explained in the TAG finding on
>> "URIs, Addressability, and the use of HTTP GET";
>>
>>   "Safe operations (read, query, view, ask, lookup, etc.) on HTTP
>>    resources SHOULD be implemented using GET because that allows the
>>    result documents to be identified by URI, while using POST
>>    does not."
>>     --
> But surely this is a completely disjunct issue.
> MGET does identify resources by URI, the resources being described,
> and servers are free to return a URI denoting the body of knowledge
> returned, if they see fit.
> The above finding deals with the behavior of the Web, not the
> behavior of the Semantic Web.

Every finding which deals with the Web deals with the Semantic Web,
just as every finding which deals with the Internet deals with the Web.
One is built using the other, not in contrast to it.

> All the more reason to have distinct verbs to capture Semantic
> Web behavior.
>> I don't like GET+Meta because I feel it violates a good practice
>> suggestion of Webarch;
>>
>>   "Consistent representations: It is confusing and costly when, for a
>>    given URI, representations vary in unpredictable ways."
>>     --
> I agree.
> Though, descriptions are not representations -- and for this reason,
> I think it's not quite kosher to use GET to provide descriptions
> (or anything other than representations).

You are making a two-level system, of resources and "descriptions".
This is, if you don't mind my saying so,
a common temptation in global system design, but an error.
A one-level recursive system is more powerful and general.
It is the same type of error as the proposal that schemas should not
be in the web.  I wrote "Dictionaries in the library?"
to try to make that point.  Reading "Gödel, Escher, Bach" is better.

> Cheers,
> Patrick


Received on Saturday, 15 February 2003 17:33:31 UTC