- From: Bullard, Claude L (Len) <clbullar@ingr.com>
- Date: Wed, 19 Feb 2003 09:08:26 -0600
- To: www-tag@w3.org
That's a contract issue between the buyer of the server space and the vendor of the server space, much like determining how much storage is provided for a given service option. It isn't an architectural issue, because, as Paul says, the service provider can block either way. However, the user has to be aware that they have to contract for the right to be crawled. Many aspects of using the web successfully come down to technical awareness of how the system works, so one can get the services one needs and pays for. Our local Internet service providers pride themselves on their ability to communicate that information to their customers.

len

-----Original Message-----
From: Paul Prescod [mailto:paul@prescod.net]

Patrick.Stickler@nokia.com wrote:
>
> ....
> A specific question to help me determine that: if the server owner
> says "no crawlers at all on this server" and a tenant says "all my
> own content can be crawled", should the tenant's content be crawled?

The server owner runs the HTTP server. No data can go in or out of the system without their explicit or implicit agreement. In particular, they can trivially block the MGET method, or the metadata header, or whatever emerges, in their Apache configuration.
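[The precedence question in Patrick's quoted example falls directly out of how the Robots Exclusion Protocol works: crawlers fetch a single /robots.txt from the server root, a location only the server owner controls, and the original protocol has no per-directory override a tenant could publish. A minimal sketch, with a hypothetical hosting domain and tenant path:]

```
# http://example-hosting.com/robots.txt -- controlled by the server owner
User-agent: *
Disallow: /
```

[With that file in place, a tenant's page at /~tenant/ cannot grant crawling back; hence the answer to "should the tenant's content be crawled?" is no, unless the tenant's contract gets the owner to carve out their path in this one file.]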
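[Paul's point about blocking MGET can be made concrete. A sketch of an Apache-2.x-era configuration that refuses every method outside the usual read set, which would reject an extension method like the proposed MGET with 403 Forbidden; the metadata header name below is hypothetical:]

```apache
# httpd.conf (sketch): allow only the listed methods everywhere;
# MGET and other extension methods are denied
<Location />
    <LimitExcept GET HEAD POST OPTIONS>
        Order deny,allow
        Deny from all
    </LimitExcept>
</Location>

# Likewise, mod_headers can strip a metadata request header
# before any handler sees it (header name is illustrative)
RequestHeader unset X-Metadata
```

[The `<LimitExcept>` form is used rather than `<Limit MGET>` because it fails closed: methods the owner never anticipated are denied along with MGET.]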
Received on Wednesday, 19 February 2003 10:08:58 UTC