RE: robots.rdf

>> Incidentally, since you brought the P3P thingy in, the smart way would
>> be instead to stick any RDF you want in its well-known location (that
>> has been designed just to allow this), so browsers like IE6 etc
>> will just munch the metadata with a single GET.

Ok, so my suggestion was made in ignorance of P3P, but I'm not entirely
sure (after a minuscule bit of reading) how my suggestion differs from the
smart way you describe, except perhaps that robots.txt is probably a
better-known location than /w3c/p3p.xml.
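
For what it's worth, a minimal sketch of the single-GET pattern from the
agent's side (Python, purely illustrative; /w3c/p3p.xml and /robots.txt
are real conventions, /robots.rdf is the hypothetical one under
discussion):

    import urllib.error
    import urllib.request

    SITE = "http://example.com"

    # One GET per candidate well-known location.
    for path in ("/w3c/p3p.xml", "/robots.txt", "/robots.rdf"):
        try:
            with urllib.request.urlopen(SITE + path) as resp:
                print(path, resp.status,
                      resp.headers.get("Content-Type"))
        except urllib.error.HTTPError as err:
            print(path, err.code)

The appeal is obvious: no parsing, no negotiation, just a fixed path.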

>(Oh, fun, let's have an argument!)

joy!

>The smart thing is to *not* use well-known locations, but to follow an age
>old tradition: if you want to know about a web site, *read its homepage*.
>
>It works for machines as well as for people. The WKL hack may be a
>justifiable hint in some contexts, but in general it's a bad thing. It is
>not for W3C, the IETF or anyone to tell me what my URIs mean. I've paid
>money for domain names in exchange for the ability to deploy URIs with
>those names in the Web. I don't want to find out, perhaps years later,
>that some WG have decided they know what http://danbri.example.com/p3p/ or
>http://danbri.example.com/rdf/ are to be used for. I'm wary of a trend
>towards WKRs because they encourage a view that says Working Groups can
>set URI naming conventions.

Unlike other syntax, transfer, and encoding protocols...?

>> Note this also relates to the sitemap thread (in fact, that's been one
>> of the possible applications we had in mind).
>
>Indeed it does. Being able to find a manifest or overview page for a site,
>w/ pointers to associated web services, RSS feeds, data dumps, site map
>file(s), privacy statements, etc. etc., is a worthy goal. But I'm having
>trouble understanding the value of inventing WKRs beyond the published
>home page URIs for these sites. Metadata could be embedded in the XHTML,
>available by content negotiation, or linked to from home page. Or
>all three...

...or any of umpteen other ways that the agent would have to be aware
of (a sketch of which follows below). I am somewhat in agreement with the
it's-my-URI-and-I'll-do-what-I-want-to point (though
it's-my-markup-and-I'll-make-it-.NET can't be far behind), but we could
already use existing techniques with the content of the pages - <H1>
tags for keywords or whatever - if we so wished. The issue is to get
something that makes metadata accessible, so that the content provider
can easily provide it and the agent can easily read it.
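
To make the "umpteen ways" concrete, here's a rough sketch of what a
metadata-hunting agent ends up doing (Python; the rel="meta" link
convention and the /robots.rdf fallback are assumptions for
illustration, not settled practice):

    import urllib.request
    from html.parser import HTMLParser

    class MetaLinkFinder(HTMLParser):
        """Collect href values from <link rel="meta"> in a homepage."""
        def __init__(self):
            super().__init__()
            self.hrefs = []

        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "link" and (a.get("rel") or "").lower() == "meta" \
                    and a.get("href"):
                self.hrefs.append(a["href"])

    def find_metadata(site):
        # 1. Content negotiation: ask the homepage itself for RDF.
        req = urllib.request.Request(
            site, headers={"Accept": "application/rdf+xml"})
        resp = urllib.request.urlopen(req)
        if "rdf" in (resp.headers.get("Content-Type") or ""):
            return [site]  # the server negotiated RDF for us
        # 2. Otherwise parse the (X)HTML for <link rel="meta"> pointers.
        finder = MetaLinkFinder()
        finder.feed(resp.read().decode("utf-8", "replace"))
        if finder.hrefs:
            return finder.hrefs
        # 3. Last resort: a well-known location (hypothetical).
        return [site + "/robots.rdf"]

Three different mechanisms before the agent has read a single triple -
that's the accessibility problem in a nutshell.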

Ok, it's preferable to use the homepage as the WKL - but that page is
usually going to be (X)HTML, and (correct me if I'm wrong) there doesn't
seem to be any progress towards even guidelines for incorporating the
RDF. Perhaps P3P/robots.rdf-style WKLs might help get the RDF revs up;
once everyone's past the starting post, it won't be an issue.

mount /highhorse
If RDF is going to be the key to the SW, then it's about time there was
more action at the front end (the dodgy HTML 3.2 world) as well as the
back-end (model) tinkering.

Cheers,
Danny.

Received on Wednesday, 9 January 2002 18:38:13 UTC