Re: Proposed issue: site metadata hook

Having read all of the threads on this (some of which I admit to
propagating), I figured I'd go back and read the original post again.  Upon
a second reading, here are my thoughts...

Anyone designing such a hook might need to keep a few things in mind (imho):

1) In the case of /robots.txt, /w3c/p3p.xml, and /favicon.ico, these files
can be easily maintained by even the least experienced person, just by
copying the appropriate file to the appropriate location.  That's it.  No
other files, headers, server settings, etc. need to be touched.  Requiring
people to do any more than this seems like an uphill battle.
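To make that concrete: today's convention really is just a file copy.
Here's a minimal sketch (the webroot is hypothetical; a temp directory
stands in for the server's actual document root):

```shell
# Publishing robots.txt under the current convention: one file copy.
# A temp directory stands in for the real document root (hypothetical).
WEBROOT=$(mktemp -d)
printf 'User-agent: *\nDisallow:\n' > robots.txt
cp robots.txt "$WEBROOT/robots.txt"   # that's it, no config changes
ls "$WEBROOT"
```

No headers, no server settings, nothing else to touch.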

2) In the case of robots.txt, any hook that adds a level of indirection
will likely not be adopted.  For instance, if GoogleBot has to issue a
HEAD /, follow a URI (returned in a header) to retrieve an RDF document,
then parse that document to find the location of the robots.txt file, and
then repeat all of this for every other site on the web it indexes, I'm
guessing Google would just stick with the /robots.txt convention.  Having
a browser follow the same steps may not be as bad for P3P and the favicon,
since the extra traffic puts the burden more on the server than on the
client.
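To put rough numbers on the crawler argument, here's a hedged sketch of
the per-site request counts under each approach.  Everything here is
hypothetical (the original post names no header or document format); the
point is only the multiplier, not the exact protocol:

```python
# Request counts: direct /robots.txt convention vs. the proposed
# metadata-hook indirection.  All URIs and names are assumptions made
# for illustration only.

def requests_direct():
    """Current convention: one GET per site."""
    return ["GET /robots.txt"]

def requests_indirected():
    """Proposed hook: HEAD /, fetch the metadata document a header
    points to, then fetch the robots file that document names."""
    return [
        "HEAD /",              # discover metadata URI from a header
        "GET /site-meta.rdf",  # retrieve and parse the RDF document
        "GET /robots.txt",     # finally fetch the actual robots file
    ]

if __name__ == "__main__":
    sites = 1_000_000  # a crawler visiting a million sites
    print(sites * len(requests_direct()))      # 1000000
    print(sites * len(requests_indirected()))  # 3000000
```

At web scale, tripling the requests (and adding a parse step) per site is
exactly the kind of cost I'd expect a crawler operator to refuse.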

3) How much trouble is this causing right now?  In theory, it makes sense
that the owner of a domain should have full control over their identifiers
and the resource(s) they point to.  In practice, though, how many people
have actually run into problems with this, compared to the number who
haven't?

I haven't been on this list long and therefore don't expect my words will
have much sway, but I'd rather see the TAG continue to put its energy into
more pressing issues.

---
Seairth Jacobs
seairth@seairth.com

Received on Wednesday, 12 February 2003 21:40:34 UTC