- From: Roy T. Fielding <fielding@ebuilt.com>
- Date: Wed, 10 Oct 2001 17:53:08 -0700
- To: Jeffrey Mogul <mogul@pa.dec.com>
- Cc: Matt Black <MBlack@Smart421.com>, "'http-wg@cuckoo.hpl.hp.com'" <http-wg@cuckoo.hpl.hp.com>
 
Jeff's summary was right on the mark.
> P.S.: My personal belief is that a well-implemented server with
> sufficient resources will try to keep a persistent connection
> open for as long as possible.  The specification, however, does
> not require this, and in many server architectures it seems to
> be difficult to maintain large numbers of idle connections without
> excessive resource consumption.
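The resource-consumption problem Jeff describes is usually handled by reaping idle connections. As a minimal sketch (not any particular server's implementation — the data structure and names here are illustrative), a server can track last-activity time per connection and periodically drop those that have been idle too long:

```python
def reap_idle(connections, now, idle_timeout):
    """Drop connections idle longer than idle_timeout.

    connections: dict mapping a connection id to its last-activity time.
    A server with a bounded descriptor/memory budget cannot hold idle
    persistent connections forever, so it reclaims the oldest ones.
    Returns the list of reaped connection ids.
    """
    expired = [cid for cid, last in connections.items()
               if now - last > idle_timeout]
    for cid in expired:
        del connections[cid]
    return expired
```

A real server would run this from its event loop (or fold it into the select/poll timeout) rather than as a standalone pass.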
The right amount of time seems to be dependent on a number of factors,
including the nature of the server (origin/proxy/gateway), the type of
applications being used via the server, and the mix of content.  A good
study of HTTP timeouts on a general-purpose server can be found at
    http://www.inria.fr/rrrt/rr-3840.html
Those numbers are based on a traditional daemon listening to a socket
interface.  An HTTP server that is tightly integrated with the TCP
implementation can do better by managing available ports as potential
connections, resulting in significantly less overhead than a user process
watching for more requests on a standard TCP socket.  The socket interface
just isn't smart enough to efficiently handle adaptive timeouts on
high-scalability servers.  The OS needs to manage both TCP wait states
and the "between" states of connection-accept-til-first-data-segment and
connection-close-til-client-acks-last-data, since otherwise the amount
of time spent in those states prevents adaptive timeouts from making new
connections available fast enough.
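The adaptive-timeout idea above can be reduced to a simple policy: shrink the idle timeout as the connection table fills, so new connections become available under load. This is a hedged sketch of one such policy (a linear scale with assumed bounds, not a formula from the message or any specific server):

```python
def adaptive_idle_timeout(active, capacity, max_timeout=15.0, min_timeout=1.0):
    """Scale the per-connection idle timeout down as load rises.

    active:   current number of open connections
    capacity: maximum connections the server is willing to hold
    At zero load an idle connection may linger max_timeout seconds;
    near capacity it is cut to min_timeout so slots recycle quickly.
    """
    load = min(active / capacity, 1.0)
    return max(min_timeout, max_timeout * (1.0 - load))
```

Note that the wait-state problem Roy raises still applies: if connections sit in TIME_WAIT or the accept/close "between" states outside the server's control, tuning this number alone cannot free slots fast enough.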
In other words, a client cannot anticipate the nature of a server's
timeout, because HTTP allows it to be tuned according to the needs of
the server implementation.
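Since a client cannot know when the server will close an idle persistent connection, the usual consequence is: a safe (idempotent) request sent on a reused connection should be retried once if the connection turns out to be stale. A minimal sketch of that retry discipline, with the transport abstracted behind a callable (the exception set is an assumption about what a closed connection surfaces as):

```python
def retry_once_on_reset(send, exceptions=(ConnectionResetError, BrokenPipeError)):
    """Issue a request, retrying exactly once if the connection was stale.

    send: a zero-argument callable that performs the request and either
    returns a response or raises one of `exceptions` when the server has
    already closed the persistent connection. Only appropriate for
    idempotent requests such as GET.
    """
    try:
        return send()
    except exceptions:
        # The server's idle timeout fired between our requests; the
        # caller's send() is expected to reconnect on this second try.
        return send()
```

A second consecutive failure propagates to the caller, which is the right behavior: at that point the problem is not a stale connection.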
....Roy
Received on Thursday, 11 October 2001 02:10:32 UTC