Re: p1 7.2.4: retrying requests

Hi Mark,

sorry for the delay.

On Sun, Jun 05, 2011 at 11:06:51AM +1000, Mark Nottingham wrote:
> On 04/06/2011, at 3:32 PM, Willy Tarreau wrote:
> > If a connection dies during
> > an idempotent request, it's easy to retry it. POSTs are also sent over those
> > connections, at the risk of losing them and having to retry them. In my
> > opinion, adding an "Expect: 100-Continue" header to those requests is enough
> > to ensure they are sent over a valid connection. But the issue remains when
> > there is an empty body with the POST, because if we set the Expect header
> > with the empty body, we'll cause a deadlock (or the server might notice it
> > and proceed anyway).
> > 
> > So for empty POST requests, we still have no means of testing the connection
> > before reusing it. Or maybe by using chunked encoding and sending the 0<CRLF>
> > after 100 is received, provided the server accepts chunked encoded requests ?
> > 
> > In fact, connection pooling is becoming so common nowadays that I think we
> > should ensure any implementation gets all corner cases right rather than
> > just saying they can't replay a POST over a broken connection, because they'll do stupid
> > or dangerous things to get it working anyway (and if you knew the number of
> > people I encounter who are amazed that a POST must not be blindly replayed...).
> I don't think we can generalise your pattern using Expect/continue, because clients can't always be sure that the next hop is 1.1.
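As an illustration of the probe discussed in the quoted text (an empty-body POST sent with "Expect: 100-continue" and chunked encoding, withholding the terminating zero-length chunk until the 100 arrives), here is a minimal sketch. The function name and host are illustrative, not from the original message:

```python
# Sketch of the connection probe described above: send the headers of an
# empty chunked POST with "Expect: 100-continue", then send the final
# zero-length chunk only once "100 Continue" has been received, proving
# the pooled connection is still alive.

def build_probe_request(host: str, path: str) -> bytes:
    """Headers for an empty chunked POST; the body (just the final
    zero-length chunk) is withheld until "100 Continue" arrives."""
    lines = [
        f"POST {path} HTTP/1.1",
        f"Host: {host}",
        "Expect: 100-continue",
        "Transfer-Encoding: chunked",
        "",
        "",  # blank line terminating the header block
    ]
    return "\r\n".join(lines).encode("ascii")

# The terminating chunk, sent only after "HTTP/1.1 100 Continue":
FINAL_CHUNK = b"0\r\n\r\n"
```

As the quoted text notes, this only works if the server accepts chunked-encoded requests, which is one reason the approach cannot be generalised.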

Indeed, I agree on this point, though I was assuming that products which
already aggregate connections sit on the server side and definitely expect
the next hop to be 1.1. And I have no trouble taking this for granted, as
it can be a prerequisite for the feature to be enabled, and it can be
checked in previous responses.

> However, we could make it clear that it's OK for clients to retry non-idempotent requests when they have some sort of agreement (in-band, e.g., a protocol extension like POE, or out-of-band).
> Would that be workable for you?

I'm not sure what those sorts of agreement would mean, and in fact I'm not
even sure that we have solid ways to cover all situations. Still, some
products already do that (possibly at some risk, I don't know). I regularly
get strong requests from users to implement the same features in haproxy,
and while I agree it makes sense to implement them, my reading of the spec
does not make me 100% comfortable with it, because I think that at some
point we have to ignore some important rules.

My real concern is to ensure that such an implementation does not lie to
the user: for instance, if such a gateway is installed in front of a
server and decides to automatically retry a request over a new
connection, the visitor must not receive two products in his mailbox
and must not have his account debited twice.

The only "reliable" solution I have against this is for the gateway to
break the client connection if the server connection dies, and let the
client retry. While this can work for the 2nd and subsequent requests
from the client, if it does this for the first request of the client's
connection, the client will display an error and will not retry. So as
you see, a gray area remains (at least for me) here.
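The fallback described above could be sketched as a small decision function in the gateway; method classification follows the usual idempotency rules, and all names here are illustrative:

```python
# Sketch of the gateway behavior described above: when the server-side
# connection dies, retry only idempotent requests on a new connection;
# for anything else, break the client connection so the client itself
# decides whether to retry. The gray area: if this happens on the very
# first request of the client's connection, the client will show an
# error instead of retrying.

IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def on_server_connection_death(method: str, is_first_request: bool) -> str:
    if method in IDEMPOTENT_METHODS:
        return "retry-on-new-connection"   # safe to replay
    if is_first_request:
        return "gray-area"                 # client would display an error
    return "break-client-connection"       # let the client decide to retry
```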

Best regards,

Received on Tuesday, 7 June 2011 05:31:56 UTC