
discovering site metadata

From: Mark Nottingham <mnot@mnot.net>
Date: Wed, 5 Dec 2001 18:43:41 -0800
To: www-talk@w3.org
Message-ID: <20011205184340.F8676@mnot.net>

Mechanisms for discovering "site metadata" (statements about grouped
resources) like '/robots.txt' and '/w3c/p3p.xml' are considered bad
because they impose an external convention on someone's URI
namespace. I'm wondering if there have been any proposals for a
mechanism that is based on HTTP and URIs to solve this problem, and
if so, why they weren't adopted.
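
For concreteness, here's roughly what the well-known-location
approach looks like from a client's point of view (just a sketch;
example.org is a placeholder host):

  import urllib.error
  import urllib.request

  # Today's convention: metadata lives at fixed paths that every
  # publisher on the host has to leave free, whether they want to or not.
  for path in ("/robots.txt", "/w3c/p3p.xml"):
      url = "http://example.org" + path
      try:
          with urllib.request.urlopen(url) as resp:
              print(path, "->", resp.status, resp.headers.get("Content-Type"))
      except urllib.error.HTTPError as err:
          print(path, "->", err.code)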

The obvious solution would be to use OPTIONS * along with an
appropriate Accept header (or maybe something like AcceptNS?). Are
there any problems with this sort of approach? The only thing that I
can see is that OPTIONS isn't cacheable, but that doesn't completely
kill it as an, er, option. 
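
To make that concrete, a minimal sketch of what such a request might
look like (the media type is purely illustrative, and since AcceptNS
isn't specified anywhere, plain Accept is used here):

  import http.client

  conn = http.client.HTTPConnection("example.org")  # placeholder host
  # Ask the server itself where/what the site metadata is, rather
  # than reserving a path in its URI namespace for it.
  conn.request("OPTIONS", "*",
               headers={"Accept": "application/rdf+xml"})
  resp = conn.getresponse()
  print(resp.status, resp.reason)
  print(resp.getheader("Allow"))
  conn.close()

The response would then carry, or point to, the site metadata,
instead of the client guessing a path.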

It seems that if there ever were a time to recommend a solution to
this, it would be now; P3P is winding its way to Recommendation, and
IIRC there have been some Web Services-related discovery protocols
that use a well-known location as well.

-- 
Mark Nottingham
http://www.mnot.net/
 