Re: New Version Notification for draft-thomson-http-replay-00.txt

> While I understand that such an issue exists, I am not sure if it is a
> replay attack.


A better way to think about it might be this: a MITM could always hold back the request even now, but it can't mislead the origin about whether the request came over 0-rtt, because no such promise exists right now. With this mechanism, however, it can confuse the backend with a retry. So I think such an issue should be in scope for this discussion.

I don't think this mechanism is meant to stop replays; it's just meant to give the app the right information to decide whether or not the client followed its policy of sending data in 0-rtt. Replays are protected against by the transport rather than the application. For example, if I had a transport without replay protection, this mechanism would not protect me from the classic replay-safety attack of an attacker sending "safe" requests again and again and learning information from the length of the response, if I was okay with the request being sent over 0-rtt.
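
As a concrete example of the kind of decision this leaves to the application, here's a minimal sketch in Go of an origin rejecting unsafe methods that arrive flagged as early data. The handler is mine, not from the draft, and 425 is a stand-in for the draft's still-unassigned 4NN status code:

    package main

    import "net/http"

    // Reject unsafe methods that the gateway flagged as early data, so
    // the client retries them after the handshake completes. 425 is a
    // stand-in for the draft's unassigned 4NN "Too Early" code point.
    func handler(w http.ResponseWriter, r *http.Request) {
        earlyData := r.Header.Get("Early-Data") == "1"
        unsafeMethod := r.Method != http.MethodGet && r.Method != http.MethodHead
        if earlyData && unsafeMethod {
            w.WriteHeader(425)
            return
        }
        w.Write([]byte("ok"))
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }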


Subodh

________________________________
From: Kazuho Oku <kazuhooku@gmail.com>
Sent: Wednesday, July 19, 2017 7:32:25 AM
To: Subodh Iyengar
Cc: Mike Bishop; Martin Thomson; HTTP Working Group
Subject: Re: New Version Notification for draft-thomson-http-replay-00.txt

2017-07-19 16:20 GMT+02:00 Subodh Iyengar <subodh@fb.com>:
> Martin, a few others and I discussed this draft offline just after the HTTP
> WG meeting, and I believe an extension of the dkg-style attack is possible
> on the current proposal.
>
> I'm making the following assumptions:
>      * There is no special API for the TLS terminator to handle 0-rtt data,
> i.e. it is treated as part of the same stream as the 1-rtt data
>
> A terminating proxy uses one of two approaches to decide whether to set
> Early-Data upstream:
>
>      * The terminating proxy just checks isHandshakeFinished() to determine
> whether the data was sent in 0-rtt, and sets Early-Data accordingly
> (sketched below)
>      * The server buffers the entire request until it gets the Finished
> message and then forwards the request without the Early-Data header
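>
> A minimal sketch of that first strategy (TerminatedConn is an invented
> name standing in for whatever API the TLS stack exposes):
>
>     package proxy
>
>     import "net/http"
>
>     // TerminatedConn stands in for a hypothetical TLS-terminator API.
>     type TerminatedConn interface {
>         IsHandshakeFinished() bool
>     }
>
>     // If the request was read before the client's Finished arrived,
>     // flag it as early data. Note that this checks handshake state at
>     // read time, which is what the attack below exploits.
>     func markEarlyData(conn TerminatedConn, req *http.Request) {
>         if !conn.IsHandshakeFinished() {
>             req.Header.Set("Early-Data", "1")
>         }
>     }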
>
>      * There are two timeouts on the client: a transport timeout, which is
> larger, and a request timeout, which is smaller
>      * The client will potentially retry requests that time out, possibly
> on a new connection
>      * A client decides to send a non-idempotent request over 0-rtt and
> relies on the server to reject it
>
>
> Let's say the following things happen:
>
> 1. A client sends a request to the terminating proxy over 0-rtt.
> 2. A MITM forwards the ClientHello to the proxy, gets the ServerHello etc.
> and forwards it to the client, then receives the early data and Finished
> message. The MITM holds back the request and the Finished message.
> 3. The request times out on the client, and the client retries on a new
> connection over 1-rtt to another server.
> 4. The transport has not timed out yet.
> 5. The MITM then releases the original data to the proxy.
> 6. Since the proxy received the data and Finished together, it considers
> the request not to be early data and does not forward the Early-Data
> header upstream.
> 7. The upstream server has now received the same request twice without
> knowing that it was sent over 0-rtt, so it would not reject either copy
> and, if it didn't have another idempotency mechanism, would execute the
> request twice.

While I understand that such an issue exists, I am not sure if it is a
replay attack.

To me it seems like a retry attack.

It seems to me that the impact of the attack is equal to that of a MITM
holding back the packet carrying an HTTP response to cause the client to
retry the request.

Or am I missing something?

> Any strategy that does not provide a custom API for servers to tell the
> difference between 0-rtt and non-0-rtt data suffers from this problem.
> However, custom APIs are painful to work with.
>
> Fortunately, I think there is a simple fix to this: have the client set the
> Early-Data header directly, and then the proxy can just forward it through.
> I understand that the client cannot determine this exactly, but it could be
> conservative about it.
>
> Subodh
> ________________________________
> From: Mike Bishop <Michael.Bishop@microsoft.com>
> Sent: Thursday, June 22, 2017 3:40:03 PM
> To: Martin Thomson
> Cc: HTTP Working Group
> Subject: RE: New Version Notification for draft-thomson-http-replay-00.txt
>
> Ah, just terminology, then.  I was reading "gateway" as "intermediary," not
> as "reverse proxy."  A reverse proxy is much more likely to have a close
> configuration relationship with its back-end server(s).
>
> -----Original Message-----
> From: Martin Thomson [mailto:martin.thomson@gmail.com]
> Sent: Thursday, June 22, 2017 3:24 PM
> To: Mike Bishop <Michael.Bishop@microsoft.com>
> Cc: HTTP Working Group <ietf-http-wg@w3.org>
> Subject: Re: New Version Notification for draft-thomson-http-replay-00.txt
>
> On 23 June 2017 at 03:38, Mike Bishop <Michael.Bishop@microsoft.com> wrote:
>> I like most of it, but the second paragraph in Section 5 seems a little
>> hand-wavy.  The gateway is supposed to "know" the server supports this new
>> standard, which it can only fully do if it has received a 4NN in the past,
>> which would only happen if it knew in the past, which....  Chicken, meet
>> egg.
>
> I don't think that it is as hand-wavy as all that.  Keep in mind that as a
> gateway for HTTPS, the gateway has the keys for the origin server.  That
> means that there is a pretty strong relationship there.
> The gateway can use that.  For instance, a CDN might delay requests
> unconditionally unless their customer has provided an override that enables
> immediate forwarding.
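>
> A sketch of that kind of policy (GatewayConn and OriginConfig are invented
> names for illustration):
>
>     package gateway
>
>     // GatewayConn stands in for the gateway's view of the client TLS
>     // connection.
>     type GatewayConn interface {
>         IsHandshakeFinished() bool
>         WaitForHandshake() // block until the client's Finished arrives
>     }
>
>     type OriginConfig struct {
>         ForwardEarlyData bool // customer-provided override
>     }
>
>     // Delay a request received in early data until the handshake
>     // completes, unless the customer has opted in to immediate
>     // forwarding.
>     func admit(conn GatewayConn, cfg OriginConfig) {
>         if !conn.IsHandshakeFinished() && !cfg.ForwardEarlyData {
>             conn.WaitForHandshake()
>         }
>     }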
>
> This is less of a chicken and egg issue even if the gateway is forced to
> hold requests.  As we well know, the number of requests that can fit into
> the first round trip is limited.  That's why header compression is so
> useful.  Early data gives the client a little more space for sending
> requests.  It's not much, but it's something.



--
Kazuho Oku

Received on Wednesday, 19 July 2017 14:50:01 UTC