W3C home > Mailing lists > Public > ietf-http-wg@w3.org > April to June 2017

Re: [TLS] Fwd: New Version Notification for draft-thomson-http-replay-00.txt

From: Martin Thomson <martin.thomson@gmail.com>
Date: Mon, 26 Jun 2017 10:44:19 -0700
Message-ID: <CABkgnnXRDTnsJArRsOhwsjOPLYvHrMkNG0=dNQBnjq4VBO9TqA@mail.gmail.com>
To: Benjamin Kaduk <bkaduk@akamai.com>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
Thanks for your thoughts Ben,

On 26 June 2017 at 08:32, Benjamin Kaduk <bkaduk@akamai.com> wrote:
> I do think that it's worth mentioning on this list the
> qualitative distinction between tens of replays and billions of replays that
> was made on the TLS list.

https://github.com/martinthomson/http-replay/pull/17

(You also mentioned things like cookies and whether we might have
other guidance for how clients might restrict what they send.  We
currently focus on request methods, and take the extremely
conservative position of recommending only "safe" methods.)

> I'm also a little surprised that there is no discussion of whether a request
> is permitted to be sent partially in 0-RTT and partially in 1-RTT data and
> what the semantics are for such a request -- it just talks about "requests"
> that are sent "immediately", "requests in early data", and the like.

This is actually intentional, but we should say that explicitly.

https://github.com/martinthomson/http-replay/issues/16

> In section 2, I'm not sure that we need to mention the TLS-native
> strateg(ies) (item 4).

I think that it's important to mention, if only because a lot of the
other defenses rely on that point you made earlier about reducing the
potential billions down to something more manageable.  It's especially
relevant when you are worrying about leakage through side-channels.

> There is also some potential subtlety in item (3),
> relating to whether the server decides to respond with 4NN (Too Early) in a
> deterministic fashion *across all servers that might handle the request*
> based solely on the contents of the request (which is a very safe strategy)
> or also includes information about the rate of incoming requests/etc. in the
> decision (which could lead to some successful replays).

That might be too subtle in the sense that it relies on someone making
a non-obvious decision about how to handle replays.  Hopefully we have
provided enough information for those people to make the right
decision, because I can't think of any easy way to first introduce the
notion of treating requests differently across a different server
instances and then the consequences of that choice.

> I think you have some good text about the server being able to accept or
> reject the risk of replay for a given request, and the client being able to
> decide on risk when creating requests.
>
> However, I'm not sure this claim is accurate:
>
>    [...] In general, if a request does not have state-
>    changing side effects on a resource, the consequences of replay are
>    not significant.
>
> with respect to the feasibility of side-channel attacks on the preparation
> of the response.  (And the hopefully obvious note that affecting/determining
> whether or not a resource is in a particular cache is a side effect.)

I think that it is accurate, though as you correctly observe, what
people think of as state-changing side-effects is sometimes far more
limited in scope.  That's dangerous.  I'm proposing a tweak to the
language, but thinking about whether there needs to be more exposition
on this point.

https://github.com/martinthomson/http-replay/pull/18

> I would clarify that the inability to selectively reject early data is at
> the TLS layer,

Done, thanks.

> In section 3, is there a good justification for leaving as only SHOULD NOT
> send unsafe methods in early data?  If this is something security sensitive,
> it would seem that there is some rationale for making it mandatory.

There are plenty of cases where a client knows that a request is safe
to attempt, and a server is always able to make the same
determination.

> Also, what might
> cause a client to abandon those requests?  Should we give
> examples/reasoning?

This is totally generic, and so specific reasons for abandoning are
hard to scope properly.

> (Token binding is one thing that comes to mind, as the
> requests would need to be regenerated with the proper bindings;

Ahh, 0-RTT token binding is a horror.  This is why generally the
"start over" thing is important.  I think that the best way to
implement token binding is to decorate requests as they get written to
the socket, so that the header field is not (incorrectly) attached to
the thing that is retried.

> asking for clarity on having the decision to abandon the request be
> specifically due to the early data rejection vs. just the client is not
> interested in the response anymore

This ambiguity was intentional from my perspective.  On the one hand,
I don't think that rejection of early data is good grounds for
deciding to abandon a request and the "close the tab" example is a
better basis for abandoning a request.  But I also recognize that
retrying does create that one-time replay exposure and I didn't want
to expressly prohibit deciding to abandon requests if early data is
rejected.

> Section 4.1 adds the "Early-Data" header; just to confirm my understanding,
> this is just about ensuring that the semantic difference between early data
> and regular data gets conveyed properly through multiple hops -- it does not
> try to present a uniquifying key as Cf-0rtt-Unique did, so "Early-Data" can
> actually succeed in its designated role.  (Well, presuming that
> implementations comply with the spec, I suppose.)

Correct.  The unique key might have value, but we're looking for the
minimum viable solution here and I can't convince myself that the
unique key adds enough to warrant standardizing something like that.

> There is also some small
> potential for reader confusion in that retries can (but perhaps should not)
> be initiated by the TLS stack on 0-RTT negotiation failure, and also by the
> HTTP stack on 4NN (Too Early), but only one of those will come into effect
> for any given request.

TLS already covers this adequately:

> If the server rejects the “early_data” extension, the client application MAY opt to retransmit early data once the handshake has been completed. Note that automatic re-transmission of early data could result in assumptions about the status of the connection being incorrect. For instance, when the negotiated connection selects a different ALPN protocol from what was used for the early data, an application might need to construct different messages. Similarly, if early data assumes anything about the connection state, it might be sent in error after the handshake completes.

> Editorial nit:
>
> In section 2:
>
>    A server can limit the amount of early data with the
>    "max_early_data_size" field of the "early_data" TLS extension.  This
>    can be used to avoid committing an arbitrary amount of memory for
>    deferred requests.  A server SHOULD ensure that when it accepts early
>    data, it can defer processing of requests until after the TLS
>    handshake completes.
>
> "it can defer processing" might mention that this means the server must have
> sufficient resources available to store the buffered requests.

I'm not happy with this text but the mention of "max_early_data_size"
was to specifically direct attention to the resource commitment
constraint.
Received on Monday, 26 June 2017 17:44:54 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 17:15:03 UTC