
Re: URI serialization issues

From: Andreas Sewe <sewe@rbg.informatik.tu-darmstadt.de>
Date: Fri, 20 Jan 2006 11:30:13 +0100
Message-ID: <43D0BBB5.50005@rbg.informatik.tu-darmstadt.de>
To: dawg comments <public-rdf-dawg-comments@w3.org>

Mark Baker wrote:
> Kendall Clark wrote:

>> 2. How does the query coming in via POST instead of GET really help
>> anything? In both cases a service may send QueryRequestRefused in 
>> response to a request that's too expensive to complete.
> 
> It would help things in the sense that there wouldn't be a URI around
> that spiders, pre-fetchers, and other automata might "bombard" with 
> requests without understanding the cost to the publisher.  POST 
> doesn't work like that because it entails the client (potentially) 
> incurring an obligation ... which is why spiders don't use it (that
> and the fact that they don't know *what* to POST).

Well, if you do have these costly URIs lying around, wouldn't Robot
Exclusion, a.k.a. robots.txt, fit the bill?
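
For instance, a minimal robots.txt (assuming, purely for illustration,
that the expensive query service lives at /sparql) might read:

    User-agent: *
    Disallow: /sparql

That single Disallow line tells every compliant robot to keep away
from the endpoint, while human-driven clients are unaffected.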

Granted, this only works for benevolent spiders that respect
robots.txt, but IMHO it is the simplest solution to your particular
problem. Of course, "A Method for Web Robots Control" never got the
IETF's official blessing -- but it is out there and it works.

Regards,

Andreas Sewe
