Re: robots.rdf

On Wed, 9 Jan 2002, Massimo Marchiori wrote:

> Incidentally, since you brought the P3P thingy in, the smart way would
> be instead to stick any RDF you want in its well-known location (that
> has been designed just to allow this), so browsers like IE6 etc
> will just munch the metadata with a single GET.

(Oh, fun, let's have an argument!)

The smart thing is to *not* use well-known locations, but to follow an
age-old tradition: if you want to know about a web site, *read its homepage*.
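
(To make the machine half of that concrete, here is a rough sketch in
Python, not any spec. The example URL and the <link rel="meta"> convention
are illustrative assumptions about how a site might point from its homepage
to its RDF, nothing more.)

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class MetaLinkFinder(HTMLParser):
    # Collects href values from <link rel="meta"> elements on a page.
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            a = dict(attrs)
            if (a.get("rel") or "").lower() == "meta" and a.get("href"):
                self.hrefs.append(a["href"])

def site_metadata_links(homepage):
    # Read the homepage and return absolute URIs of whatever metadata
    # it chooses to advertise. No well-known locations involved.
    html = urlopen(homepage).read().decode("utf-8", "replace")
    finder = MetaLinkFinder()
    finder.feed(html)
    return [urljoin(homepage, href) for href in finder.hrefs]

# danbri.example.com is a placeholder, not a real deployment.
print(site_metadata_links("http://danbri.example.com/"))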

It works for machines as well as for people. The WKL hack may be a
justifiable hint in some contexts, but in general it's a bad thing. It is
not for the W3C, the IETF, or anyone else to tell me what my URIs mean. I've paid
money for domain names in exchange for the ability to deploy URIs with
those names in the Web. I don't want to find out, perhaps years later,
that some WG has decided it knows what http://danbri.example.com/p3p/ or
http://danbri.example.com/rdf/ are to be used for. I'm wary of a trend
towards WKLs because they encourage the view that Working Groups can
set URI naming conventions.

> Note this also relates to the sitemap thread (in fact, that's been one
> of the possible applications we had in mind).

Indeed it does. Being able to find a manifest or overview page for a site,
with pointers to associated web services, RSS feeds, data dumps, site map
file(s), privacy statements and so on, is a worthy goal. But I'm having
trouble understanding the value of inventing WKLs beyond the published
home page URIs for these sites. Metadata could be embedded in the XHTML,
made available by content negotiation, or linked to from the home page. Or
all three...
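
(For the content negotiation option, the client side is equally small. A
sketch only: it assumes the server has been configured to return RDF when
asked for application/rdf+xml at the home page URI, and the URL is again
made up.)

from urllib.request import Request, urlopen

def fetch_site_rdf(homepage):
    # Ask the home page URI itself for RDF; whether and how the server
    # honours the Accept header is entirely the site owner's choice.
    req = Request(homepage, headers={"Accept": "application/rdf+xml"})
    with urlopen(req) as resp:
        print(resp.headers.get("Content-Type"))
        return resp.read()

rdf = fetch_site_rdf("http://danbri.example.com/")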

Dan


-- 
mailto:danbri@w3.org
http://www.w3.org/People/DanBri/
