RE: New Version Notification for draft-nottingham-httpbis-retry-01.txt

Some assorted reasons show up in the discussion starting around 1:15:30.  Scale, I’m not sure; Patrick had the direct experience and might be able to tell more.

From: Wenbo Zhu []
Sent: Friday, February 10, 2017 7:17 PM
To: Mark Nottingham <>
Cc: HTTP Working Group <>; Roy T. Fielding <>
Subject: Re: New Version Notification for draft-nottingham-httpbis-retry-01.txt

> "with some sites even requiring browsers to retry POST requests in order to properly interoperate"

Do we know the exact reason (and scale) behind such behavior?

On Fri, Feb 3, 2017 at 10:03 PM, Roy T. Fielding <> wrote:
> On Feb 1, 2017, at 12:41 PM, Tom Bergan <> wrote:
> > Applications sometimes want requests to be retried by
> > infrastructure, but can't easily express them in a non-idempotent
> > request (such as GET).
> nit: did you mean "in an idempotent request (such as GET)"?
> > A client SHOULD NOT automatically retry a failed automatic retry.
> Why does RFC 7230 say this? I am aware of HTTP clients that completely ignore this suggestion, and I can't offhand think of a reason why this is a good rule-of-thumb to follow.

This is only referring to retries due to a dropped connection. The reason is that a
second connection drop is (in almost all cases) due to the request itself, as opposed to
something transient on the network path.  [BTW, this doesn't refer to requests yet to be
sent in a request queue or pipeline -- just the retried request in flight for which no response
is received prior to FIN/RST (or equivalent).]

There might be a good reason to go ahead and retry with an exponential back-off,
but I don't know what that would be in general. I know lots of clients do stupid
things because they are afraid of communicating server errors to their user.
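To make the rule above concrete, here is a minimal Python sketch of one way a client might combine the two ideas in this thread: retry a dropped-connection failure only for idempotent methods, retry at most once per the RFC 7230 guidance, and compute an exponential back-off delay with jitter. The function names and parameters are illustrative, not from any spec or library.

```python
import random

# Methods RFC 7231 defines as idempotent (POST and PATCH are not).
IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def should_retry(method: str, attempt: int) -> bool:
    """Decide whether to retry after a dropped connection.

    Follows the rule discussed above: only idempotent methods, and
    never automatically retry a failed automatic retry (so at most
    one retry, i.e. only when attempt == 0).
    """
    return method.upper() in IDEMPOTENT_METHODS and attempt == 0

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Exponential back-off with full jitter.

    A common pattern, not something the thread prescribes: delay grows
    as base * 2**attempt, capped, with a random factor to avoid
    synchronized retry storms.
    """
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

Under this sketch, a failed GET would be retried once after a short randomized delay, while a failed POST would surface the error to the caller rather than being silently replayed.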


Received on Saturday, 11 February 2017 04:24:08 UTC