
Re: NEW ISSUE(S): Retrying Requests

From: Jamie Lokier <jamie@shareable.org>
Date: Fri, 7 Mar 2008 02:29:28 +0000
To: Robert Siemer <Robert.Siemer-httpwg@backsla.sh>
Cc: Brian Smith <brian@briansmith.org>, 'HTTP Working Group' <ietf-http-wg@w3.org>
Message-ID: <20080307022928.GA24991@shareable.org>

Robert Siemer wrote:
> On Thu, Mar 06, 2008 at 08:22:13PM +0000, Jamie Lokier wrote:
> > 
> > Robert Siemer wrote:
> > > On Thu, Mar 06, 2008 at 12:33:45AM +0000, Jamie Lokier wrote:
> > > 
> > > > I'm still puzzled as to when a client should reuse a persistent
> > > > connection for requests that (it knows) shouldn't be retried.
> > > > 
> > > > Since all servers close a persistent connection an unspecified time
> > > > after the first request, and that's perfectly healthy (all servers
> > > > must do it), ...
> > > 
> > > Why must all servers do that?
> > 
> > Two reasons:
> > 
> >    1. Because idle TCP/IP sockets get into a stuck state, if the other
> >       end disappears off the net.  Especially on internet facing
> >       servers, these tend to accumulate without bound (older servers
> >       had to be rebooted from time to time because of this).
> Newer servers have TCP keepalive.

Yes.  The server has to choose a TCP keepalive timeout.  I briefly
confused this with the same problem (of matching timeouts) in another
guise, but no, you're right.  The server does have to choose a
timeout, but it's only to detect connectivity failure, and doesn't
have to match any client application behaviour.
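(To illustrate, a sketch of what choosing that keepalive timeout looks
like on a Linux-style stack -- the socket options are the standard
ones, but the timeout values here are made-up examples, not anything
recommended in this thread:)

```python
import socket

# Sketch: enable TCP keepalive on a server-side socket, assuming a
# Linux-style stack that exposes TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Probe a connection after 600s of idleness, then every 60s, giving up
# after 5 unanswered probes.  This timeout only detects a dead peer;
# it doesn't have to match any client application behaviour.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 600)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
```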

> >       Servers will accumulate these until they run out of ports or
> >       memory, if they don't have some way to drop old, idle connections.
> The server runs out of port 80? What ports are you talking about?

Sorry, I meant sockets.  Some systems run out of TCP sockets (fixed
table in the kernel), and many unix systems have a limited number of
file descriptors that the server application can keep open.  Even if
the limit is 10000 sockets, it's easily reached if you keep
connections open "forever".
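(As a sketch of the file-descriptor limit I mean -- these are the
standard POSIX rlimit calls, nothing specific to any server discussed
here:)

```python
import resource

# Sketch: the per-process file-descriptor limit is one of the resources
# a server exhausts if it never closes idle connections.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"fd limit: soft={soft}, hard={hard}")

# A process can raise its soft limit up to the hard limit without
# privileges; past the soft limit, accept() starts failing with EMFILE.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```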

> Apart from that you just switched from "persistent connections" to "old, 
> idle connections". TCP connections don't grow old. Servers don't need to 
> close connections in use - be it after the 100th or 100000th request or 
> be it after a "long" time.

By old, I meant one that was idle for a long time, needing the
application to close it.  As you rightly say, TCP itself will maintain
a connection forever if there is no traffic.

> >    1. To defend against too many clients keeping connections open for
> >       an arbitrarily long time, whether maliciously or too popular.
> You give your second reason the same number (1.), because it is actually 
> the same? ;-)   Apart from memory consumption is there anything else to 
> defend?

I see your point.  Still, even a fancy event-driven server on Solaris
or Linux will hit limits from time to time (not memory, but resources
like file descriptors).

It does depend on having a newer TCP stack, though.  TCP keepalive
isn't available in some of the old ones, and many older systems cannot
handle large numbers of sockets.

Last time I looked at Apache's code, it had a timeout on idle
connections.  I presume this is mainly to avoid having thousands of
idle sockets around, and because it was more of a problem in the past,
with older systems, and possibly lacking TCP keepalive.
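(For reference, that idle timeout is configurable in httpd.conf; the
directive names below are Apache's, but the values are illustrative,
not defaults I'm vouching for:)

```
# Allow persistent connections, but close one that sits idle for more
# than 5 seconds, and cap the requests served per connection.
KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 100
```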

The implication of what you're saying is that modern systems aren't
bothered, so, with TCP keepalive, it's fine to leave connections open
for a "long time", whatever that may mean.

I'm not sure how happy I am to rely on that as a strategy.

-- Jamie
Received on Friday, 7 March 2008 02:29:42 UTC
