RE: Proposed issue: site metadata hook

> -----Original Message-----
> From: ext Tim Berners-Lee []
> Sent: 11 February, 2003 15:15
> To: Stickler Patrick (NMP/Tampere)
> Cc:;
> Subject: Re: Proposed issue: site metadata hook
> And what, then, is the URI of the URI of the information about the 
> resource?
> (do we have MMGET to get metadata about that?)

Certainly not.

Surely you are not suggesting that there cannot exist resources
without names? ;-)

The body of knowledge returned by MGET *is* a resource, but
one need not say anything about it or give it a name if one
does not need to (and I expect most folks won't).

But if you *do* want to, you can return the URI of that
body of knowledge in the HTTP headers using a header such as
URI:, à la Apache (though with absolute rather than relative
URIs), or something akin to it, as is commonly done in content
negotiation to identify the particular entity (variant) being
returned and to differentiate it from the resource denoted in
the HTTP request.
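Such an exchange might look like the following. Note that the
MGET method is the proposal under discussion, but the response
header name "URI:" here follows Apache's content-negotiation
convention, and all of the URIs shown are invented for
illustration only:

```
MGET /some/resource HTTP/1.1
Host: example.org

HTTP/1.1 200 OK
Content-Type: application/rdf+xml
URI: http://example.org/knowledge/some/resource

<rdf:RDF ...> ... </rdf:RDF>
```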

Then, if you like, and if any knowledge is defined about it,
you can do an MGET on that URI, and get back, along with the
results, yet another URI denoting *that* body of knowledge, 
and iterate ad nauseam until bored.

One simple way to implement this recursive naming is by
means of a standardized URI infix, so that when MGET is
called on

the RDF/XML returned contains statements describing the
resource denoted by and that entity
returned by HTTP has a URI of

which denotes the body of knowledge available via MGET from
that particular server about the resource

If one were to do a GET on, one
would obtain the same RDF/XML encoded knowledge as one would
get by doing an MGET on

HOWEVER, if one does an MGET on
one gets a description about the body of knowledge known by
the server about, with a URI

Thus, the URI of the body of knowledge about the body of
knowledge about the body of knowledge about the body of
knowledge about the body of knowledge known about the
resource denoted by is, etc...
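The infix scheme above can be sketched in a few lines. This is a
hypothetical illustration only: the infix token "MGET" and its
position at the start of the path are assumptions, since no
convention was actually fixed.

```python
# Hypothetical sketch of the "MGET infix" naming scheme.
# The infix token and its placement are assumptions for
# illustration; a server could choose any convention it likes.
from urllib.parse import urlsplit, urlunsplit

INFIX = "/MGET"  # assumed reserved infix token

def knowledge_uri(uri: str) -> str:
    """Return the URI denoting the body of knowledge a server
    holds about the resource denoted by `uri`, by prefixing
    the path with the reserved infix."""
    scheme, netloc, path, query, frag = urlsplit(uri)
    return urlunsplit((scheme, netloc, INFIX + (path or "/"), query, frag))

u = "http://example.org/some/resource"
u1 = knowledge_uri(u)   # http://example.org/MGET/some/resource
u2 = knowledge_uri(u1)  # http://example.org/MGET/MGET/some/resource
```

Applying the function repeatedly yields the URI of the knowledge
about the knowledge, and so on, exactly as described above.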


Downright silly, to be sure, but simple nonetheless, and
it will work just fine, thank you very much.

However, I suspect you are mostly thinking about documents 
describing resources, and the URIs of those documents. Well,
that's what rdfs:isDefinedBy is for. No?

And the content of the body of knowledge about a resource
known to a server may very well be the syndication of
knowledge from multiple documents and/or other servers.

If you have a document that expresses knowledge about a
resource, and a server supporting MGET that has syndicated
knowledge from that document, or at least is aware of the
descriptive nature of that document, one would expect the
knowledge returned by MGET to include an rdfs:isDefinedBy
statement indicating the URI of the document from which
the knowledge originated.

Then, the agent would GET the specified document and
do what it needs with it.

And if one wanted knowledge about the document, well 
just MGET it using the very same URI of the document.
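The agent-side flow just described can be sketched as follows:
parse the RDF/XML returned by MGET and pull out the
rdfs:isDefinedBy target, which the agent would then fetch with an
ordinary GET. The sample RDF/XML and the URIs in it are invented
for illustration; only the rdf: and rdfs: namespace URIs are real.

```python
# Sketch: extract rdfs:isDefinedBy targets from RDF/XML returned
# by a (hypothetical) MGET, using only the standard library.
import xml.etree.ElementTree as ET

RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
RDFS = "{http://www.w3.org/2000/01/rdf-schema#}"

sample = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
  <rdf:Description rdf:about="http://example.org/some/resource">
    <rdfs:isDefinedBy rdf:resource="http://example.org/docs/desc.rdf"/>
  </rdf:Description>
</rdf:RDF>"""

def defining_documents(rdf_xml: str) -> list[str]:
    """Return the URIs of the documents the description points
    to via rdfs:isDefinedBy."""
    root = ET.fromstring(rdf_xml)
    return [el.attrib[RDF + "resource"]
            for el in root.iter(RDFS + "isDefinedBy")]

print(defining_documents(sample))  # ['http://example.org/docs/desc.rdf']
```

The agent would then GET each returned URI, and, if it wanted
knowledge about the document itself, MGET that same URI.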

It's really devilishly simple.

And of course, the URI of the body of knowledge known about a 
web site would be ;-)

Furthermore, a server is free to specify whatever URI it
likes to denote the body of knowledge returned by MGET. It
need not use anything like the above MGET infix. It might
just use UUIDs and track returned entities per session.
Whatever. It really doesn't matter. Though, of course, the
MGET infix is more in line with "Cool URIs don't change" ;-)

In fact, if that body of knowledge is actually taken from a
specific document, say then one
would expect the server to provide the URI of the entity
returned from an MGET to, since that's 
the actual resource being returned.


So, I hope it is now clear that there really is no kind
of MMMMMMMGET problem.



> Tim
> PS: This is the problem with PROPFIND.
> On Tuesday, Feb 11, 2003, at 03:29 US/Pacific, 
> wrote:
> >
> > MGET solves this problem (and many others).
> >
> > A web site is just another resource. Let's agree that the
> > <scheme>://<authority> portion of a URI denotes a web site
> > (I don't think that will be all that controversial).
> >
> > So <> denotes a web site.
> >
> > And, 'MGET' will return an RDF/XML instance 
> > providing
> > the description of that site.
> >
> > There is no need to require a metadata file with separate 
> > identity nor
> > two calls to the server to get the required information (first a GET
> > or HEAD to get the metadata file URI and then a GET to get 
> > the file).
> >
> > A single call of MGET does the job.
> >
> > And MGET also solves numerous other problems, such as those 
> > addressed
> > by RDDL as well as general access to resource metadata via 
> > their URIs.
> >
> > And MGET allows all the confusion about XML Namespaces to simply
> > be tossed aside, since MGET deals with full URIs and one can then
> > inspect the knowledge defined about each individual term 
> > regardless
> > of whatever namespace was used as punctuation in some XML 
> > serialization.
> >
> > A URI denotes a resource.
> > Use GET to get a representation of the resource.
> > Use MGET to get knowledge about the resource.
> >
> > Browsing the semantic web then is analogous to browsing the web,
> > but using MGET rather than GET. Like two sides of the same coin,
> > and HTTP is the coin.
> >
> > Simple.
> >
> > I'm working on having a demonstration of MGET and friends by the
> > technical plenary...
> >
> > Cheers,
> >
> > Patrick
> >
> >
> >> -----Original Message-----
> >> From: ext Tim Berners-Lee []
> >> Sent: 10 February, 2003 18:02
> >> To:
> >> Cc:
> >> Subject: Proposed issue: site metadata hook
> >>
> >>
> >>
> >> In the face-to-face meeting I took an action to write up a
> >> proposal for
> >> the following potential issue:
> >>
> >>
> >> Proposed Short name:  SiteMetadata-nn
> >>
> >> Title:   Web site metadata improving on robots.txt, w3c/p3p
> >> and favicon
> >> etc
> >>
> >> The architecture of the web is that the space of identifiers
> >> on an http web site is owned by the owner of the domain name.
> >> The owner, "publisher",  is free to allocate identifiers
> >> and define how they are served.
> >>
> >> Any variation from this breaks the web.  The problem
> >> is that there are some conventions for the identifiers on
> >> websites, that
> >>
> >>     /robots.txt  is a file controlling robot access
> >>     /w3c/p3p is where you put a privacy policy
> >> /favicon.ico   is an icon representative of the web site
> >>
> >> and who knows what others.  There is of course no
> >> list available of the assumptions different groups and 
> >> manufacturers
> >> have used.
> >>
> >> These break the rule.  If you put a file which happens to be
> >> called robots.txt but has something else in it, then weird
> >> things happen.
> >> One might think that this is unlikely, now, but the situation could
> >> get a lot worse.  It is disturbing that a
> >> precedent has been set and the number of these may increase.
> >>
> >> There are other problems as well - as sites are catalogued
> >> by a number of different agents, there tend to be all kinds
> >> of requests for things like the above, while one would like to
> >> be able to pick such things up as quickly as possible.
> >>
> >> If, when these features were designed, there had been a
> >> general way of attaching metadata to a web site, it would
> >> not have been necessary.
> >>
> >> The TAG should address this issue and find a solution,
> >> or put in place steps for a solution to be found,
> >> which allows the metadata about a site, including that for
> >> later applications, to be found with the minimum overhead
> >> and no use of reserved URIs within the server space.
> >>
> >> Example solution for feasibility
> >>
> >> A new HTTP header such as "Metadata:" is introduced.
> >> It takes one parameter, which is the URI of the
> >> metadata document.  The header is supplied in response to
> >> any GET or HEAD of the root document ("/"). It may also
> >> be supplied on any other request, including error
> >> responses.
> >>
> >> The Metadata document is conventionally written in RDF/XML.
> >> It contains pointers to all kinds of standard and/or proprietary
> >> metadata about the site, including for example
> >>
> >> - privacy policy
> >> - robot control
> >> - icon for representing the site
> >> - site maps
> >> - syndicated (RSS) feeds
> >> - IPR information
> >> - site policy
> >> - site owners
> >>
> >> The solution only needs to document the hook and the
> >> vocabulary to point to metadata resources in current
> >> use.  Vocabulary for new applications can be defined
> >> by those applications.
> >>
> >> timbl
> >>
> >>

Received on Tuesday, 11 February 2003 11:10:27 UTC