
Re: HTTP Session Extension draft

From: Chuck Shotton <cshotton@biap.com>
Date: Thu, 6 Jul 1995 15:44:58 -0500
Message-Id: <v02120d12ac21fa732616@[]>
To: "Daniel W. Connolly" <connolly@beach.w3.org>
Cc: Jeffrey Mogul <mogul@pa.dec.com>, Alex Hopmann <hopmann@holonet.net>, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
At 4:16 PM 7/6/95, Daniel W. Connolly wrote:
>In message <v02120d02ac21d6eccd15@[]>, Chuck Shotton writes:
>>The last thing I want to do with a resource-constrained server is
>>re-implement the nightmare of hundreds of blinking cursors in otherwise
>>idle telnet sessions. The HTTP protocol is primarily connectionless
>>(stateless) for reasons of efficiency from the server's perspective. I
>>think a persuasive argument can be made for keeping a stream open while all
>>of the required parts of a single "page" are transmitted. Allowing an
>>individual to monopolize a scarce resource for longer periods of time, on
>>the off chance that a human might select another link to your site from the
>>page he just received, IS irresponsible.
>Anybody from the MIT TechInfo project around? I read one of their
>papers one time -- or maybe it was just a random posting that sounded
>good. Anyway, they had a large body of real data that suggested some
>nearly optimal time limit for TCP connections in their information
>system.  20 seconds rings a bell.

Practical experience bears this out as well. You can watch server "queues"
back up substantially when transfer times on a Web server exceed about 20
or 30 seconds. There is a point where any server begins to thrash as new
connections arrive faster than old ones can be serviced. There is an odd
connection between transfer times (average file size) and how quickly this
thrashing begins, and it is based on the bandwidth available to the client
software. 100 14.4k PPP users will destroy a server much faster than 10
ethernetted clients, even though the latter have much higher bandwidth.
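The arithmetic behind this can be sketched with Little's law: the average
number of simultaneously open connections is the arrival rate times the
transfer time, and slow clients stretch the transfer time enormously. (A
minimal illustration; the file size and request rate below are assumed for
the example, not taken from the message.)

```python
def concurrent_connections(requests_per_sec, file_size_bytes, client_bps):
    """Average simultaneously open connections, by Little's law:
    concurrency = arrival rate * time each transfer holds a connection."""
    transfer_time = file_size_bytes * 8 / client_bps  # seconds per transfer
    return requests_per_sec * transfer_time

# A hypothetical 50 KB page at 5 requests/sec, served to modem vs. LAN clients:
slow = concurrent_connections(5, 50_000, 14_400)      # 14.4k PPP: ~139 open
fast = concurrent_connections(5, 50_000, 10_000_000)  # 10 Mbit/s Ethernet: ~0.2
```

Same request rate, same bytes served, but the modem clients each hold a
connection for nearly 28 seconds, so the server must juggle hundreds of
them at once.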

>Clearly, this needs to be a policy set by the information provider.
>Many of them -- most or all in the near term -- will close the
>connection immediately after each request. But folks that want to
>optimize perceived performance will go for longer timeouts.  A
>heuristic like "less than 10 seconds will be a noticeable performance
>loss for your readers, and more than 50 (or 100) seconds will probably
>_not_ be a noticeable performance increase" is probably a good thing.

It's definitely the case that servers will have to get smarter as network
resources become more constrained. We can't expect everyone to have a T3
and an array of Alphas...

Chuck Shotton                               StarNine Technologies, Inc.
chuck@starnine.com                             http://www.starnine.com/
cshotton@biap.com                                  http://www.biap.com/
                 "Shut up and eat your vegetables!"
Received on Thursday, 6 July 1995 13:50:30 UTC