Re: p1 7.2.4: retrying requests

On 06/03/2011 10:32 PM, Willy Tarreau wrote:
>
>> In my experience, implementations do retry GETs, but not POSTs, and they often use a fresh connection for POSTs (et al) to avoid just this situation.
>>
>> So, I'm inclined to just drop 7.2.4, but maybe I don't know the whole story. Thoughts?
>>      
> I agree with you. However, this point still leaves open a small hole where
> a difficult case still exists : some gateways are using connection pools to
> servers and aggregate incoming requests over those existing connections. In
> theory those connections never die (in theory...). If a connection dies during
> an idempotent request, it's easy to retry it. POSTs are also sent over those
>    

Hi, I noticed some stickiness over the use of the word "retry" when a 
"recovery" mode is already available, especially in mesh networks where 
some prefer to simply apply error correction instead of retry or 
recovery. Whatever the flow, our optimizations agree on a law of 
conservation.
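
For the idempotent case quoted above, here is a minimal sketch of the 
plain retry flow (Python here, with a simple list standing in for a 
gateway's connection pool; host and path are just placeholders):

    import http.client

    def idempotent_get(pool, host, path, attempts=2):
        """Issue a GET, retrying on a fresh connection if a pooled one is dead."""
        last_error = None
        for _ in range(attempts):
            # Reuse a pooled connection when available, otherwise open a new one.
            conn = pool.pop() if pool else http.client.HTTPConnection(host)
            try:
                conn.request("GET", path)
                response = conn.getresponse()
                body = response.read()
                pool.append(conn)   # still healthy: give it back to the pool
                return response.status, body
            except (ConnectionError, http.client.RemoteDisconnected) as exc:
                conn.close()        # stale: safe to retry, the GET is idempotent
                last_error = exc
        raise last_error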


> connections, at the risk of losing them and having to retry them. In my
> opinion, adding an "Expect: 100-Continue" header to those requests is enough
> to ensure they are sent over a valid connection. But the issue remains when
>    

That depends on whether that state exists in an inner connection or an 
outer connection. Inner and outer can be virtualized in many ways, so no 
specific hardware can deem what to expect besides a diagnostic mode 
(which, we can always say, does not agree with the law of conservation). 
I suggest:

 > Expect: 100-Continue / media-type

If the media type does not agree, then another connection is used for 
retries or recovery. That clarifies the "Expect:" header as a signal for 
a verbose mode (of particular media states) or as a pseudo-trace method 
(chunked mode).
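
As a rough sketch of that probe over a reused connection (raw socket in 
Python, plain "Expect: 100-continue" without the media-type extension, 
host and body being placeholders), the body is only written once a 100 
comes back, so a dead pooled connection is detected before anything 
unsafe to replay has been sent:

    import socket

    def post_with_expect(sock, host, path, body):
        """Probe a reused socket with Expect: 100-continue before the POST body."""
        head = (
            f"POST {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Content-Length: {len(body)}\r\n"
            f"Expect: 100-continue\r\n"
            f"\r\n"
        )
        sock.sendall(head.encode("ascii"))
        # On a stale connection this send or read fails (or returns b"") and
        # the request can be retried elsewhere: no body has been committed yet.
        interim = sock.recv(4096)
        if not interim.startswith(b"HTTP/1.1 100"):
            raise ConnectionError("connection not usable, retry on a fresh one")
        sock.sendall(body)
        return sock.recv(65536)   # final response (naive single read, sketch only)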


> there is an empty body with the POST, because if we set the Expect header
> with the empty body, we'll cause a deadlock (or the server might notice it
> and proceed anyway).
>
> So for empty POST requests, we still have no means of testing the connection
> before reusing it. Or maybe by using chunked encoding and sending the 0<CRLF>
> after 100 is received, provided the server accepts chunked encoded requests ?
>
> In fact, connection pooling is becoming so common nowadays that I think we
> should ensure any implementation gets all corner cases right rather than just
> saying they can't replay a POST over a broken connection, because they'll do
> stupid or dangerous things to get it working anyway (and if you knew the number
> of people I encounter who are amazed that a POST must not be blindly replayed...).
>    
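
For the empty-body case quoted just above, a sketch of the chunked probe 
Willy describes (same handshake as before, assuming the server accepts 
chunked requests): no Content-Length is sent, and the terminating 
0<CRLF> chunk only goes out once the 100 arrives:

    import socket

    def empty_post_chunked_probe(sock, host, path):
        """Probe a reused socket with an empty chunked POST."""
        head = (
            f"POST {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Transfer-Encoding: chunked\r\n"
            f"Expect: 100-continue\r\n"
            f"\r\n"
        )
        sock.sendall(head.encode("ascii"))
        interim = sock.recv(4096)
        if not interim.startswith(b"HTTP/1.1 100"):
            raise ConnectionError("no 100 Continue; connection may be stale")
        sock.sendall(b"0\r\n\r\n")   # zero-length last chunk: the body is empty
        return sock.recv(65536)      # final response (naive single read, sketch only)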

I know of cases where every request is immediately logged, the connection 
is closed, and the response is sent later, once processed, over another 
connection opened for it. It sounds like, whatever frustration is the 
cause above, the HTTP status 202 Accepted is not being issued, expected, 
or allowed appropriately. Basically, if the origin server already knows 
the response cannot be given by the end of the request (processed or not), 
then 202 Accepted should be returned, which prevents retries.
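
As a minimal sketch of that pattern (Python's standard http.server, with 
the actual processing stubbed out as a queue), the origin acknowledges 
with 202 as soon as the request is logged, so the client has no reason 
to retry:

    import queue
    from http.server import BaseHTTPRequestHandler, HTTPServer

    work = queue.Queue()   # stands in for the log / deferred processing

    class AcceptingHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            work.put((self.path, body))   # logged for later, out-of-band processing
            # The response cannot be produced by the end of this request,
            # so acknowledge with 202 Accepted instead of making the client wait.
            self.send_response(202)
            self.send_header("Content-Length", "0")
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), AcceptingHandler).serve_forever()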


-- 
--- https://twitter.com/Dzonatas_Sol ---
Web Development, Software Engineering, Virtual Reality, Consultant

Received on Saturday, 4 June 2011 13:20:58 UTC