Re: p1 7.2.4: retrying requests

From: Willy Tarreau <w@1wt.eu>
Date: Tue, 7 Jun 2011 08:39:53 +0200
To: Mark Nottingham <mnot@mnot.net>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <20110607063953.GD2195@1wt.eu>
On Tue, Jun 07, 2011 at 04:08:52PM +1000, Mark Nottingham wrote:
> > I'm not sure what that sort of agreement means, and in fact I'm not even
> > sure that we have solid ways to cover all situations. Still, some products
> > already do this (possibly at some risk, I don't know). I regularly get
> > strong requests from users to implement the same features in haproxy, and
> > while I agree it makes sense to implement them, my reading of the spec
> > does not make me 100% comfortable with it, because I think that at times
> > we have to ignore some important rules.
> Well, a gateway / reverse proxy (like haproxy) is implicitly a device that has an agreement with the server; the protocol it speaks can be HTTP/1.1, but because it's a gateway, it can be something else too. We need to make this more apparent generally, I think.

OK, I agree on this point.

> > My real concern is to ensure that such an implementation does not lie to
> > the user, and for instance that if such a gateway is installed in front
> > of a server and it decides to automatically retry a request over a new
> > connection, the visitor will not receive two products in his mailbox
> > and will not have his account debited twice.
> Right. If it's vanilla HTTP, and the request is a POST, it can't retry it.

That was my understanding too, hence the Expect tricks I suggested.
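
For illustration, the kind of Expect trick being referred to could be sketched
roughly as follows (a hypothetical Python illustration of the idea, not haproxy
code; all function names are made up):

```python
# Sketch of the "Expect trick": before committing a non-idempotent body to a
# reused server connection, send only the request head with
# "Expect: 100-continue". If the server answers with an interim 100
# (Continue), the connection is known to be alive and the body can follow.
# If the connection turns out to be dead, the failure happens before the
# body was sent, so the request can safely be replayed elsewhere.

def build_probe_head(method: str, path: str, host: str, body_len: int) -> bytes:
    """Serialize only the request head, deferring the body behind Expect."""
    return (
        f"{method} {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Length: {body_len}\r\n"
        f"Expect: 100-continue\r\n"
        f"\r\n"
    ).encode("ascii")

def server_said_continue(interim: bytes) -> bool:
    """True when the first response line is an HTTP/1.x 100 (Continue)."""
    status_line = interim.split(b"\r\n", 1)[0]
    parts = status_line.split()
    return (
        len(parts) >= 2
        and parts[0].startswith(b"HTTP/1.")
        and parts[1] == b"100"
    )
```

The point is that the gateway learns whether the reused connection is still
alive before anything irreversible has been transmitted.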

> If it's not HTTP (or HTTP++), it has some other kind of agreement covering this situation.

Let's only focus on HTTP since it's what we're interested in here, and where
the difficulty appears.

> > The only "reliable" solution I have against this is for the gateway to
> > break the client connection if the server connection dies, and let the
> > client retry. While this can work for the 2nd and subsequent requests
> > from the client, if it does this for the first request of the client's
> > connection, the client will display an error and will not retry. So as
> > you see, a gray area remains (at least for me) here.
> Why is it different for the first request?

Because RFC 2616 expects a client to retry a request over a keep-alive
connection that has just died, and my observation is that various browsers
get this right. However, when it's the first request over a connection,
they know that the closed connection cannot be a keep-alive timeout and
that it indicates a server error. As such, all the browsers I have tested
immediately report a connection error if the connection is broken during
the first request.

This makes it hard to multiplex incoming requests over existing
connections, because in theory a POST will always need a fresh connection
to the server, or the gateway will have to silently retry it (which is
forbidden), or make use of the Expect tricks I described in the first mail.
Maybe there are other, cleaner ways, but I haven't found any yet.
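
Put another way, the recovery choices described above could be summarized like
this (a rough sketch assuming plain HTTP/1.1 semantics per RFC 2616; the
function name and return strings are illustrative only):

```python
# What a gateway can do when the server connection dies mid-request.
# Idempotent requests may be replayed on a new server connection; for a
# POST the only safe fallback is to break the client connection so the
# browser retries -- which only works when it was NOT the first request on
# that client connection, since browsers treat a failure on the first
# request as a hard server error.

IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def recovery_action(method: str, first_request_on_client_conn: bool) -> str:
    if method.upper() in IDEMPOTENT_METHODS:
        # Safe to replay transparently on a fresh server connection.
        return "retry-on-new-connection"
    if not first_request_on_client_conn:
        # Browsers retry a request whose keep-alive connection just died.
        return "close-client-connection"
    # First request and non-idempotent: the browser would just report an error.
    return "report-error"
```

The last branch is exactly the gray area: there is no transparent recovery for
a non-idempotent first request.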

And the fact that I see this problem as complex, while most people I talk
to just shrug and say things starting with "bah, you just have to ...",
makes me think that it's not necessarily obvious to derive the various
corner cases from a quick reading of the spec (or that I'm really dumb,
that's possible too).

Received on Tuesday, 7 June 2011 06:40:20 UTC