Re: robots.rdf

Hi

There was a discussion about this kind of thing on the syndication list [1],
where I suggested something like this in (X)HTML files:

<link rel="syndication"
  href="rss100headline.rdf" type="application/xml"
  title="RSS 1.0 Headlines" hreflang="en-gb" />

<link rel="alternate syndication"
 href="rss091headline.rdf" type="application/xml"
 title="RSS 0.91 Headlines" hreflang="en-gb" />

<link rel="alternate syndication"
  href="rss090headline.rdf" type="application/xml"
  title="RSS 0.9 Headlines" hreflang="en-gb" />

There is also the practice of using rel="meta" to reference a metadata
file for a specific document.
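
For a single document that might look something like this (the filename
here is just a placeholder):

<link rel="meta"
  href="page-meta.rdf" type="application/xml"
  title="RDF metadata for this document" />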

Perhaps a rel="meta index", or something similar, could link to a file
that lists all the RDF metadata files for a site? This file could use
quite a simple format like RSS 1.0; see the sketch below.
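
As a rough sketch, such an index in RSS 1.0 might look like this (all the
example.org URLs and filenames are made up):

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns="http://purl.org/rss/1.0/">
  <channel rdf:about="http://example.org/meta-index.rdf">
    <title>Site metadata index</title>
    <link>http://example.org/</link>
    <description>RDF metadata files for this site</description>
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="http://example.org/page1-meta.rdf" />
        <rdf:li rdf:resource="http://example.org/page2-meta.rdf" />
      </rdf:Seq>
    </items>
  </channel>
  <item rdf:about="http://example.org/page1-meta.rdf">
    <title>Metadata for page 1</title>
    <link>http://example.org/page1-meta.rdf</link>
  </item>
  <item rdf:about="http://example.org/page2-meta.rdf">
    <title>Metadata for page 2</title>
    <link>http://example.org/page2-meta.rdf</link>
  </item>
</rdf:RDF>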

I agree that a standard for doing this kind of thing is needed.

Chris

[1] http://groups.yahoo.com/group/syndication/ 

On Mon 07-Jan-2002 at 11:03:45AM +0100, Danny Ayers wrote:
> With respect to the various approaches for embedding or linking rdf
> data from pages on the web, I was wondering if the robots exclusion
> protocol could be leveraged to make life easier for rdf-aware agents,
> in a way that would be a lot less effort than going for something like
> full-blown P3P.  There are two ways that I am aware of the protocol
> being used at present - either in a metatag (e.g. <META NAME="ROBOTS"
> CONTENT="NOINDEX, NOFOLLOW">) or in a robots.txt file in the root
> directory of the server. 
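
For reference, the robots.txt half of the protocol is just a plain text
file in the site root, along these lines (the path is made up):

User-agent: *
Disallow: /private/

so an rdf-aware convention could presumably follow the same simple
pattern.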

-- 
Chris Croome                               <chris@webarchitects.co.uk>
web design                             http://www.webarchitects.co.uk/ 
web content management                               http://mkdoc.com/   
everything else                               http://chris.croome.net/  
