
Re: Retry safety of HTTP requests

From: Cory Benfield <cory@lukasa.co.uk>
Date: Tue, 22 Mar 2016 10:14:09 +0000
Cc: Amos Jeffries <squid3@treenet.co.nz>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Message-Id: <6AA45484-F745-4C16-BD57-8A4B20E7A438@lukasa.co.uk>
To: Subodh Iyengar <subodh@fb.com>

> On 22 Mar 2016, at 08:44, Subodh Iyengar <subodh@fb.com> wrote:
> Realized I forgot to answer one more question of yours:
>> Why not use the REST semantics of HTTP itself as the signal?
> We have several POST mutations from the app; however, the realistic scenario is that many of these requests will fail on bad networks, most of the time without an HTTP/2 GOAWAY. In that case, what should we do? We have two choices: give up, assume the server might have seen the data, and leave the user with a bad experience; or build application-layer idempotency. The retry-safety property lets our HTTP library know that certain requests, even mutating ones, have idempotent properties, so it is free to retry them even if the underlying request fails.

I’m a little unclear on this.

How is retry-safety established from the perspective of the client? Is it application-specific? If it is, there’s no need for this WG to get involved: your HTTP libraries can simply have a flag that marks some requests as retry-able (and indeed, the libraries I work on all provide this functionality).
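To illustrate, a minimal sketch of such a client-side flag. The names here (send_with_retries, retry_safe, TransientNetworkError) are invented for illustration and not from any particular library; the point is that the application, not the protocol, declares a mutating request safe to retry:

```python
class TransientNetworkError(Exception):
    """Stands in for a connection reset, timeout, etc."""


def send_with_retries(send, method, retry_safe=False, attempts=3):
    """Retry methods HTTP already defines as idempotent, plus any request
    the application explicitly marks retry_safe (e.g. an idempotent POST).

    `send` is a zero-argument callable that performs the request.
    """
    idempotent = method in {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}
    max_tries = attempts if (idempotent or retry_safe) else 1
    last_exc = None
    for _ in range(max_tries):
        try:
            return send()
        except TransientNetworkError as exc:
            last_exc = exc
    raise last_exc
```

With this shape, a POST is retried only when the caller passes retry_safe=True; an unmarked POST fails on the first error, exactly as HTTP’s default semantics suggest.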

However, as I understand it, you want retry-safety to be communicated over the HTTP channel itself. The only way I can see that happening is if a server communicates that fact: some kind of header, frame, or flag that says “this request can safely be retried”. That, of course, only works if the client actually receives the response.

Given that this doesn’t address your actual problem (when no response is received), I can only assume you’re hoping for some form of caching, where a client learns from the server which requests are safe to retry and remembers that for the future. Such a model could absolutely work, but it’s not clear to me that it gains much over the standard retry logic based on idempotency and safety that HTTP already defines.
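The caching model above could be sketched roughly like this. The header name "Retry-Safe" and the class below are entirely hypothetical, invented here for illustration; no such header is defined anywhere:

```python
class RetrySafetyCache:
    """Remembers which (method, path) pairs the server has previously
    marked retry-safe, so a later failure with no response can still
    be retried. Purely illustrative; "Retry-Safe" is not a real header."""

    def __init__(self):
        self._safe = set()

    def record(self, method, path, response_headers):
        # Learn from a successful response that this request is retry-safe.
        if response_headers.get("Retry-Safe", "").lower() == "true":
            self._safe.add((method, path))

    def is_retry_safe(self, method, path):
        return (method, path) in self._safe
```

The obvious weakness, as noted above, is the cold-start case: the first attempt at any given request gets no benefit, since nothing has been learned yet.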

If that really is what you’re considering pursuing, I highly recommend using FB’s large scale and analytics capability to implement this change and then instrument it. Have your server return these headers, have your clients cache them and act appropriately, then measure how that affects user experience. You could even A/B test it.

With data in hand that proves the value of such a change, you’d have a lot more weight to make the argument that *some* signaling should be done at the HTTP layer.


Received on Tuesday, 22 March 2016 10:14:42 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 22 March 2016 12:47:11 UTC