Re: New Version Notification for draft-nottingham-httpbis-retry-01.txt

On Fri, Feb 3, 2017 at 10:03 PM, Roy T. Fielding <fielding@gbiv.com> wrote:

> > On Feb 1, 2017, at 12:41 PM, Tom Bergan <tombergan@chromium.org> wrote:
> >
> > > Applications sometimes want requests to be retried by
> > > infrastructure, but can't easily express them in a non-idempotent
> > > request (such as GET).
> >
> > nit: did you mean "in an idempotent request (such as GET)"?
> >
> > > A client SHOULD NOT automatically retry a failed automatic retry.
> >
> > Why does RFC 7230 say this? I am aware of HTTP clients that completely
> > ignore this suggestion, and I can't offhand think of a reason why this
> > is a good rule-of-thumb to follow.
>
> This is only referring to retries due to a dropped connection. The reason
> is because a
> second connection drop is (in almost all cases) due to the request itself,
> as opposed to
> something transient on the network path.  [BTW, this doesn't refer to
> requests yet to be
> sent in a request queue or pipeline -- just the retried request in flight
> for which no response
> is received prior to FIN/RST (or equivalent).]
>

There seem to be many underlying assumptions. I'm glad that Mark is making
things more explicit. For example, "a second connection drop is (in almost
all cases) due to the request itself" likely does not hold in flaky
cellular networks, such as getting on/off a subway. This is why Chrome will
automatically retry a failed page load when it detects a change in network
connectivity. Also think of the "parking lot problem" where you transition
between WiFi in the office and cellular in the parking lot.

I'm not sure your claim is strictly true even assuming good network
connectivity. A data point: I help run a very large proxy service. When we
experimented with automatic retries, we saw a ~40% success rate on the
first retry, ~20% on the second retry, and ~5% on the third retry. We only
retried GETs and HEADs where the connection was dropped before we received
a single byte of response. That ~20% success rate is arguably high enough
to make a second retry worthwhile (depending on your POV).
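To make that concrete, here's a back-of-envelope sketch (treating the
per-retry rates as independent of each other, which they surely aren't in
practice -- failures are correlated):

```python
# Back-of-envelope: overall success probability across a chain of retries.
# rates[i] = observed success rate of attempt i among the requests that
# were still failing after attempt i-1.
def cumulative_success(rates):
    failed = 1.0   # fraction of requests still unresolved
    total = 0.0    # fraction resolved by some retry
    for r in rates:
        total += failed * r
        failed *= (1.0 - r)
    return total

# The rates quoted above: ~40%, ~20%, ~5%.
print(round(cumulative_success([0.40, 0.20, 0.05]), 3))  # prints 0.544
```

By this rough math the second retry recovers another ~12% of the original
failures, while the third adds only ~2 points, which matches the intuition
that two retries may be worthwhile but three probably aren't.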

That said, I'm not trying to convince you that HTTP clients should send
multiple retries. Rather, I'm surprised that the HTTP spec takes a position
on this question in the first place. Retry logic seems like a fundamentally
application-specific property, beyond really basic things like "requests
with idempotent methods must be safe to retry."
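For example, an application-tunable policy might look something like this
(the names here are made up for illustration, not from any spec or draft):

```python
# Hypothetical sketch: let the application choose the retry budget, while
# the transport enforces only the basic safety rule (idempotent method,
# zero response bytes received before the connection dropped).
IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def should_retry(method, response_bytes_received, attempt, max_attempts=2):
    return (method in IDEMPOTENT_METHODS
            and response_bytes_received == 0
            and attempt < max_attempts)

print(should_retry("GET", 0, 1))    # prints True
print(should_retry("POST", 0, 1))   # prints False
print(should_retry("GET", 512, 1))  # prints False
```

The point being that max_attempts is the knob the spec currently pins at 1,
even though it seems like the application's call.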

> There might be a good reason to go ahead and retry with an exponential
> back-off, but I don't know what that would be in general.


It is common to use exponential backoff with RPC-over-HTTP (one example
<https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md>).
Speaking of RPC-over-HTTP, in Section 2.1 of Mark's document, perhaps we
could generalize "User Retries" to "Application Retries"? As written,
Section 2.1 is kind of specific to browsers and it's not clear how
non-browser applications fit in. RPC could be one such application. Also
think of programs like daemons that make HTTP requests (often
RPC-over-HTTP) without any direct prompting from a user.
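For reference, the backoff scheme in the gRPC doc linked above is roughly
the following (parameter values -- 1s initial, 1.6x multiplier, 120s cap,
+/-20% jitter -- are from that doc; see it for the authoritative algorithm):

```python
import random

# Sketch of exponential backoff with jitter, per the gRPC connection-backoff
# doc. Each retry waits multiplier times longer than the last, capped at
# max_backoff; jitter spreads out clients that all failed at the same moment.
def backoff_schedule(retries, initial=1.0, multiplier=1.6,
                     max_backoff=120.0, jitter=0.2):
    delay = initial
    for _ in range(retries):
        yield delay * random.uniform(1 - jitter, 1 + jitter)
        delay = min(delay * multiplier, max_backoff)

for i, d in enumerate(backoff_schedule(5), 1):
    print(f"retry {i}: sleep ~{d:.2f}s")
```

Whether a daemon retrying RPCs should use this schedule is exactly the kind
of application-specific decision I mean above.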

Received on Saturday, 4 February 2017 19:25:18 UTC