- From: Chuck Shotton <cshotton@biap.com>
- Date: Thu, 6 Jul 1995 13:53:31 -0500
- To: Simon Spero <ses@tipper.oit.unc.edu>, Jeffrey Mogul <mogul@pa.dec.com>
- Cc: Alex Hopmann <hopmann@holonet.net>, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
At 11:15 AM 7/6/95, Simon Spero wrote:
>Most systems I use have little difficulty with 1000s of open connections,
>as long as most are idle and the window sizes are chosen appropriately.

About 30% of the servers on the net today can't even come close to this. This is a chronic problem with software that originates on the Unix side of the house, then migrates to other platforms. People assume that wasteful, inefficient implementations will be accommodated by the O/S. This is not true on many platforms, and designing a protocol on the assumption that the O/S will accommodate lazy programmers is not responsive to the needs of a large number of users. Not all machines have gigabytes of swap space or thousands of IP connections to waste.

>Keeping connections alive beyond the 100 seconds may not immediately give
>a better hit-rate for trace loads- however if caching with active
>invalidation is used, having connections open makes it much easier to
>keep things up to date.

This is so far from the original intent of the HTTP protocol as to be almost unrecognizable. The server is supposed to be the passive entity in all this, with the clients actively requesting documents, maintaining state info, and driving the transactions. I recognize that there is a need for server-initiated communications. I'm just not convinced that the whole HTTP protocol needs to be turned on its ear to do it, and it certainly isn't sufficient justification to say it'll work OK on Unix.

--_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-
Chuck Shotton                           StarNine Technologies, Inc.
chuck@starnine.com                      http://www.starnine.com/
cshotton@biap.com                       http://www.biap.com/
"Shut up and eat your vegetables!"
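[Editorial aside: the resource argument above can be made concrete with a back-of-the-envelope sketch. The per-connection figures below are assumed round numbers for illustration, not measurements of any particular TCP stack of the era; even modest buffer sizes pin a nontrivial amount of memory once thousands of idle keep-alive connections are held open.]

```python
# Back-of-the-envelope cost of idle keep-alive connections.
# All per-connection figures are illustrative assumptions, not measurements.
RECV_BUF = 32 * 1024      # assumed kernel receive buffer per connection (bytes)
SEND_BUF = 32 * 1024      # assumed kernel send buffer per connection (bytes)
PCB_OVERHEAD = 1 * 1024   # assumed protocol control block / descriptor bookkeeping

def idle_connection_cost(n_connections):
    """Memory pinned by n idle TCP connections, in bytes, under the
    assumed per-connection figures above."""
    return n_connections * (RECV_BUF + SEND_BUF + PCB_OVERHEAD)

# 1000 idle connections under these assumptions:
mb = idle_connection_cost(1000) / (1024 * 1024)
print(f"{mb:.1f} MB pinned")   # roughly 63.5 MB on these assumed figures
```

On a Unix host with ample swap this is shrugged off; on a small Mac or PC server of the period it could be most of the machine's memory, which is the crux of the objection.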
Received on Thursday, 6 July 1995 11:56:02 UTC