- From: Phil Dawes <pdawes@users.sourceforge.net>
- Date: Fri, 23 Jan 2004 04:28:02 +0000
- To: Patrick Stickler <patrick.stickler@nokia.com>
- Cc: "ext Hammond, Tony (ELSLON)" <T.Hammond@elsevier.com>, ext Sandro Hawke <sandro@w3.org>, "Thomas B. Passin" <tpassin@comcast.net>, ext Jeremy Carroll <jjc@hplb.hpl.hp.com>, www-rdf-interest@w3.org
Hi Patrick,

Patrick Stickler writes:

> http: based PURLs work just fine. As I've pointed out before, you
> can accomplish all that you aim to accomplish with the info: URI
> scheme by simply using http: URIs grounded in your top level
> domain, delegating control of subtrees of that namespace to the
> various managing entities per each subscheme (the same is true
> of urn: URIs). Then each http: URI can be associated with an
> alias to which it redirects, as well as allow for access to
> metadata descriptions via solutions such as URIQA[1]. E.g.
> rather than
>
>    info:lccn/n78890351
>
> you'd have
>
>    http://info-uri.info/lccn/n78890351

But if these are non-dereferenceable URIs, how do you stop every RDF web-crawler, information gatherer and clueless agent on the planet from attempting to HTTP GET/MGET the billions of URIs in the namespace? Unless I'm missing something, as the number of these URIs scales up, so does the amount of resources spent handling 404'd requests.

The only solution I can think of is to invent a dud subdomain (one that doesn't exist) and let the DNS infrastructure deal with the 'doesn't exist' load (which it is much better placed to do). But then if you are going to do that, why not just invent a non-dereferenceable URI scheme... Doh!

Cheers,

Phil
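(The mapping Patrick proposes is mechanical enough to sketch. Below is a minimal, hypothetical illustration of rewriting an info: URI onto an http: URI under a single top-level domain; the function name is mine, and the info-uri.info base is just the example from his mail, not a real service contract:)

```python
from urllib.parse import urlparse

def info_to_http(uri, base="http://info-uri.info"):
    """Rewrite an info: URI as an http: URI grounded in one domain.

    The subscheme and identifier (e.g. 'lccn/n78890351') become the
    path under the base domain, which could then delegate each
    subscheme's subtree to its managing entity.
    """
    parsed = urlparse(uri)
    if parsed.scheme != "info":
        raise ValueError("not an info: URI: %s" % uri)
    return "%s/%s" % (base, parsed.path)

print(info_to_http("info:lccn/n78890351"))
# http://info-uri.info/lccn/n78890351
```

(Of course, this is exactly the mapping whose cost I'm questioning: every such http: URI looks dereferenceable to an agent, so it invites a GET, whereas a DNS lookup for a nonexistent host fails once and is cached by resolvers along the way.)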
Received on Friday, 23 January 2004 14:34:57 UTC