
Robots & Persistent connexion, pipelining

From: Mihai Preda <Mihai.Preda@inria.fr>
Date: Tue, 22 Aug 2000 14:05:06 +0200 (CET)
To: www-talk@w3.org, www-server@w3.org
Message-ID: <Pine.LNX.4.10.10008221401170.21326-100000@galapagos.inria.fr>


I'd like to know what you think about the use of persistent connections
(HTTP/1.1) and pipelining by robots.
Persistent connections were introduced to lower the load on the network and
the server (the cost of setting up TCP connections). But their use by a
robot conflicts with the requirement that a robot should not overload the
server with rapid-fire requests.
For example, a robot should not ask a server for a page more often than once
every 30 seconds. Taken literally, this rule excludes the use of
persistent connections.
But if I want to get 10 pages from a server, wouldn't it
be more advantageous (for the server too) to get the 10 pages at once (over
a persistent connection) and afterwards leave it in peace for 300 s (30*10),
rather than open a TCP connection and get one page every 30 s, 10 times?

First, I'd like to know what the consensus is on this matter.
Second, I'd like to know what you think of the following proposal, which
is meant to allow robots to use persistent connections:

We are still against server overload, and we still allocate 30 s per page.
But we allow a limited number of pages (say, 5 or 10) to be fetched together
over one persistent connection, on the condition that the server is left
alone for longer afterwards. If we get 5 pages at once, we won't ask
anything of the same server for the next 30*5 seconds. What do you think?
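As a minimal sketch of the proposed policy (the host, paths, batch size, and 30 s budget are illustrative assumptions, not part of any standard): fetch up to N pages over one keep-alive HTTP/1.1 connection, then rest for N times the per-page delay before contacting that server again.

```python
import time
from http.client import HTTPConnection

PER_PAGE_DELAY = 30   # politeness budget: one page per 30 seconds (assumed)
MAX_BATCH = 5         # pages fetched per persistent connection (assumed)

def rest_period(pages_fetched, per_page_delay=PER_PAGE_DELAY):
    """Idle time owed to the server after a batch: 30 s per page fetched,
    so 5 pages -> 150 s, 10 pages -> 300 s, as in the proposal above."""
    return pages_fetched * per_page_delay

def fetch_batch(host, paths):
    """Fetch up to MAX_BATCH paths over one persistent HTTP/1.1
    connection, then sleep for the accumulated politeness delay."""
    batch = paths[:MAX_BATCH]
    conn = HTTPConnection(host)  # HTTP/1.1; socket reused across requests
    bodies = []
    try:
        for path in batch:
            conn.request("GET", path)
            resp = conn.getresponse()
            bodies.append(resp.read())  # drain the body to reuse the socket
    finally:
        conn.close()
    time.sleep(rest_period(len(batch)))  # leave the server in peace
    return bodies
```

The average request rate is unchanged (one page per 30 s); only the burstiness differs, in exchange for saving the per-request TCP setup cost.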

Received on Tuesday, 22 August 2000 08:05:08 UTC
