discovering site metadata

Mechanisms for discovering "site metadata" (statements about groups of
resources), like '/robots.txt' and '/w3c/p3p.xml', are considered bad
practice because they impose an external convention on someone else's
URI namespace. I'm wondering whether there have been any proposals for
a mechanism based on HTTP and URIs to solve this problem, and if so,
why they weren't adopted.
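For concreteness, here's a rough Python sketch of how well-known-location
discovery works today (example.org is just a stand-in host). The point
is that the path is baked into every client, whether or not the site's
owner wants it there:

    import urllib.request

    # The convention lives in the client, not the server: every
    # implementation fetches this exact path, regardless of how the
    # site owner has laid out their URI space.
    url = "http://example.org/robots.txt"  # or /w3c/p3p.xml for P3P

    with urllib.request.urlopen(url) as resp:
        print(resp.read().decode("utf-8", errors="replace"))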

The obvious solution would be to use OPTIONS * along with an
appropriate Accept header (or maybe something like AcceptNS?). Are
there any problems with this sort of approach? The only one I can see
is that responses to OPTIONS aren't cacheable, but that doesn't
completely kill it as an, er, option.

It seems that if there were ever a time to recommend a solution to
this, it would be now: P3P is winding its way to Recommendation, and
IIRC some Web Services-related discovery protocols use a well-known
location as well.

-- 
Mark Nottingham
http://www.mnot.net/
 

Received on Wednesday, 5 December 2001 21:43:42 UTC