
Re: NEW ISSUE(S): Retrying Requests

From: Jamie Lokier <jamie@shareable.org>
Date: Fri, 7 Mar 2008 02:33:56 +0000
To: Adrien de Croy <adrien@qbik.com>
Cc: Robert Siemer <Robert.Siemer-httpwg@backsla.sh>, Brian Smith <brian@briansmith.org>, 'HTTP Working Group' <ietf-http-wg@w3.org>
Message-ID: <20080307023355.GB24991@shareable.org>

> >Newer servers have TCP keepalive.
> Last time I looked at keepalive specs for TCP the sorts of timeouts 
> before keepalives were sent were in the order of hours.

Perhaps he meant _newer_ servers, for which the keepalive intervals
are adjustable.
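To make the point concrete, here is a minimal sketch of those per-socket knobs. This assumes Linux, where the option names are TCP_KEEPIDLE / TCP_KEEPINTVL / TCP_KEEPCNT (other platforms spell these differently), and the numbers are illustrative assumptions, not recommendations:

```python
# Sketch: per-socket TCP keepalive tuning (Linux-specific option names assumed).
import socket

def make_keepalive_socket(idle=60, intvl=10, cnt=5):
    """Create a TCP socket whose keepalive probes start after `idle` seconds
    instead of waiting for the traditional ~2-hour kernel default."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # The next three options are Linux-specific.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)   # idle time before first probe
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, intvl) # interval between probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, cnt)     # probes before giving up
    return s

sock = make_keepalive_socket()
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE))
```

With the defaults above, a dead peer is detected after roughly idle + intvl * cnt seconds rather than hours.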

> IOW not particularly useful, and definitely not useful for a highly 
> loaded HTTP server.

> * connection handles then. 
> * kernel resources,
> * Memory
> * firewall hash entries
> * authentication tokens
> * etc etc etc
> There are lots and lots of things you really don't want hanging around 
> forever for idle clients.

Heh.  That was one reason HTTP was _invented_: to avoid the problem of
tying up state for FTP connections.  HTTP was celebrated as "stateless".

It's interesting that people now regard FTP servers as "lightweight"
in comparison to HTTP.  Total turnaround.

I have had people beg me to install an FTP server so they don't have
the overhead of a modern HTTP server!

> As someone who has seen what happens if you don't clean up... it's 
> definitely necessary.

Ok.  And does anyone have any advice on how long a server _needs_ to
keep a persistent connection open?

As noted, if it closes too early relative to what deployed clients
expect, that can lead to unnecessary errors or ambiguous retries.
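The safe moment to close is between requests: if the client has already started sending a new request when the server closes, the client cannot tell whether the request was processed, which is the retry ambiguity. A minimal per-connection sketch (the timeout value and helper name are assumptions for illustration):

```python
# Sketch of server-side idle-timeout handling for a persistent connection.
# Closing is only unambiguous *between* requests; a close that races with a
# new request leaves the client guessing whether it was processed.
import socket

IDLE_TIMEOUT = 15.0  # seconds; an assumed value, not from any spec

def serve_connection(conn, timeout=IDLE_TIMEOUT):
    conn.settimeout(timeout)
    while True:
        try:
            data = conn.recv(4096)   # wait up to `timeout` for a new request
        except socket.timeout:
            break                    # idle too long: close between requests
        if not data:
            break                    # client closed first: no ambiguity
        # ... parse the request and send a complete response here ...
    conn.close()
```

Even this only narrows the window; the race is inherent, which is why the timeout has to be generous relative to client behaviour.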

So what sort of timeout is needed to work with the major clients?

-- Jamie
Received on Friday, 7 March 2008 02:34:07 UTC
