
Re: robots.rdf

From: Massimo Marchiori <massimo@w3.org>
Date: Wed, 9 Jan 2002 13:46:34 -0500
Message-Id: <200201091846.NAA10473@tux.w3.org>
To: www-rdf-interest@w3.org

> I was wondering if the robots exclusion protocol could be leveraged to
> make life easier for RDF-aware agents, in a way that would be a lot
> less effort than going for something like full-blown P3P.
Less effort than P3P??

> There are two ways that I am aware of the protocol being used at present -
> either in a metatag (e.g. <META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">)
> or in a robots.txt file in the root directory of the server. The former
> couldn't really add much of value to the existing situation, but the latter
> might have a lot of potential. If robots.txt contained information
> specifically aimed at RDF agents, then a lot of the ad hoc link
> following/metatag scrunching that might otherwise have to be employed by
> these agents wouldn't be necessary.
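
To make that suggestion concrete: the robots exclusion protocol itself
only defines User-agent and Disallow fields, so anything RDF-specific
would have to be an extension. A purely hypothetical sketch of such a
robots.txt, with the "Metadata" field invented here for illustration:

    # Standard robots exclusion entries
    User-agent: *
    Disallow: /private/

    # Hypothetical extension: point RDF-aware agents straight at the
    # site's metadata, so no ad hoc link following is needed
    User-agent: rdf-agent
    Metadata: /site.rdf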
Incidentally, since you brought the P3P thingy in, the smart way would
instead be to stick any RDF you want in P3P's well-known location (which
was designed precisely to allow this), so browsers like IE6 etc. will
just munch the metadata with a single GET.
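
A single GET suffices because P3P 1.0 fixes /w3c/p3p.xml as the
well-known location for a site's policy reference file, so an agent can
construct the URL without knowing anything else about the site. A
minimal sketch in Python (the host is a placeholder, and serving RDF
from that location is the suggestion above, not something P3P itself
mandates):

    import urllib.request

    # P3P 1.0 defines /w3c/p3p.xml as the well-known location for a
    # site's policy reference file; no prior knowledge of the site
    # is needed to build this URL.
    url = "http://www.example.org/w3c/p3p.xml"

    # One GET retrieves the metadata; no link following or metatag
    # scraping required.
    with urllib.request.urlopen(url) as response:
        metadata = response.read().decode("utf-8")

    print(metadata)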

Note that this also relates to the sitemap thread (in fact, that has
been one of the possible applications we had in mind).

Received on Wednesday, 9 January 2002 13:46:34 UTC
