
Re: Retry safety of HTTP requests

From: Erik Nygren <erik@nygren.org>
Date: Wed, 23 Mar 2016 14:37:59 -0400
Message-ID: <CAKC-DJiMjWPwBO8dPLyv9Ye_ZWWK34Zv_NV0su4tKsFohX7ovg@mail.gmail.com>
To: Subodh Iyengar <subodh@fb.com>
Cc: "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>

This post on attacks against POST retries in HTTPS is also worth reading
for those who haven't seen it:

         http://blog.valverde.me/2015/12/07/bad-life-advice/#.VvLe6rMpDmE

(I was a little surprised to see that the behavior of browsers had shifted
over the years to transparently retry POSTs over broken connections by
default.)

On Tue, Mar 22, 2016 at 1:55 AM, Subodh Iyengar <subodh@fb.com> wrote:

>
> 2) Some transport protocols which treat retry safe requests differently
> from non retry safe requests. For example in TLS 1.3, idempotent requests
> may be sent in a 0-RTT flight, which reduces the latency for the request.
> An application might desire a non idempotent request which is retry safe be
> sent in this 0-RTT flight.
>

Given the discussions in the TLS WG around the TLS 1.3 0-RTT mode, it
seems likely that a requirement for using TLS 1.3 0-RTT in any
application protocol will be a document describing the binding: when and
how it is safe for that application to use replayable 0-RTT data, rather
than leaving this to chance.  From that perspective, I think Subodh's
proposal here could be a good start towards something that includes a
safety story for HTTP over replayable TLS 1.3 0-RTT early data.
Especially since TLS 1.3 goes to the extreme of specifying a different
application interface for sending attacker-replayable early data, HTTP
can't just blindly use that new interface without clear ground rules, or
it risks introducing new vulnerabilities.

Having clients start to implement HTTP over TLS 1.3 0-RTT early data
without a clear set of ground rules is scary from a server operator
perspective, since some of the responsibility for safety has to rest on
the client knowing what is and isn't safe to send early.  In
large-scale, globally distributed server environments there may not be a
sane way to reliably protect against replays while still providing
reasonable performance.  The same environments where 0-RTT anti-replay
protections don't work reliably are the ones where application-layer
anti-replay protections become fragile and much harder to make work on
the server side than on the client side.
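To make the client-side responsibility concrete, here is a minimal
sketch (my own illustration, not from any spec or from Subodh's draft)
of how a client might gate which requests are allowed into the 0-RTT
flight.  The set of "safe" methods is the one RFC 7231 defines; the
explicit per-request retry-safe flag is hypothetical, standing in for
the kind of application signal Subodh's proposal describes:

```python
# Illustrative sketch: deciding whether a request may ride in TLS 1.3
# 0-RTT early data.  The safe-method set comes from RFC 7231; the
# explicit retry_safe flag is a hypothetical application-level signal.

# Methods RFC 7231 defines as safe (no server state change intended).
SAFE_METHODS = {"GET", "HEAD", "OPTIONS", "TRACE"}

def may_send_in_early_data(method, retry_safe=False):
    """Return True only if the request is acceptable in replayable
    0-RTT early data: either the method is safe by definition, or the
    application explicitly marked this particular request retry-safe."""
    return method.upper() in SAFE_METHODS or retry_safe

# A GET can go in the 0-RTT flight; an unmarked POST must wait for
# the full handshake, unless the application vouches for it.
assert may_send_in_early_data("GET") is True
assert may_send_in_early_data("POST") is False
assert may_send_in_early_data("POST", retry_safe=True) is True
```

The point of the sketch is that the decision has to be made before the
connection is established, with no help from the server, which is why
the ground rules need to be written down for clients.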

           Erik
Received on Wednesday, 23 March 2016 18:38:28 UTC
