- From: Wenbo Zhu <wenboz@google.com>
- Date: Fri, 10 Feb 2017 19:16:57 -0800
- To: Mark Nottingham <mnot@mnot.net>
- Cc: HTTP Working Group <ietf-http-wg@w3.org>, "Roy T. Fielding" <fielding@gbiv.com>
- Message-ID: <CAD3-0rPvB0BwBhEF0ybd3402Spk2DQeo_P7JSCa1CC2NJdX=wQ@mail.gmail.com>
> "with some sites even requiring browsers to retry POST requests in order
> to properly interoperate"

Do we know the exact reason (and scale) behind such a behavior?

On Fri, Feb 3, 2017 at 10:03 PM, Roy T. Fielding <fielding@gbiv.com> wrote:

> > On Feb 1, 2017, at 12:41 PM, Tom Bergan <tombergan@chromium.org> wrote:
> >
> > > Applications sometimes want requests to be retried by
> > > infrastructure, but can't easily express them in a non-idempotent
> > > request (such as GET).
> >
> > nit: did you mean "in an idempotent request (such as GET)"?
> >
> > > A client SHOULD NOT automatically retry a failed automatic retry.
> >
> > Why does RFC 7230 say this? I am aware of HTTP clients that completely
> > ignore this suggestion, and I can't offhand think of a reason why this
> > is a good rule-of-thumb to follow.
>
> This is only referring to retries due to a dropped connection. The reason
> is that a second connection drop is (in almost all cases) due to the
> request itself, as opposed to something transient on the network path.
> [BTW, this doesn't refer to requests yet to be sent in a request queue or
> pipeline -- just the retried request in flight for which no response is
> received prior to FIN/RST (or equivalent).]
>
> There might be a good reason to go ahead and retry with an exponential
> back-off, but I don't know what that would be in general. I know lots of
> clients do stupid things because they are afraid of communicating server
> errors to their user.
>
> ....Roy
Received on Saturday, 11 February 2017 03:17:31 UTC
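[Editor's note: the retry rule Roy describes can be sketched as a small client-side policy. This is a minimal illustration, not code from the thread or from any real HTTP library; the function name and structure are hypothetical.]

```python
# Sketch of the RFC 7230 (Section 6.3.1) retry rule discussed above:
# a client may automatically retry an idempotent request after a dropped
# connection, but SHOULD NOT retry a failed automatic retry, and should
# not auto-retry non-idempotent methods (e.g. POST) at all.
# Names here are illustrative, not from any real library.

# Idempotent request methods per RFC 7231, Section 4.2.2.
IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}


def should_auto_retry(method: str, prior_auto_retries: int) -> bool:
    """Decide whether to automatically retry after a dropped connection.

    prior_auto_retries -- number of automatic retries already attempted
    for this request (0 when the original request just failed).
    """
    # Non-idempotent requests are never retried automatically.
    if method.upper() not in IDEMPOTENT_METHODS:
        return False
    # A failed automatic retry is not retried again: a second connection
    # drop is, in almost all cases, caused by the request itself rather
    # than by something transient on the network path.
    return prior_auto_retries == 0
```

Under this policy, a GET whose connection dropped once may be retried, but a GET whose retry also failed is not retried again, and a POST is never retried automatically.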