Re: HTTP profile for TLS 1.3 0-RTT early data?

[late to the party; pulling snippets from multiple messages]

On 05/13/2017 08:02 PM, Martin Thomson wrote:
> Yes, this is going to be subjective.  The great thing about the
> strategy that we have here is that nothing is vulnerable to replay
> unless both client and server agree to that risk.

I think that in order to strictly get the property that the server has
agreed to that risk, we need the new "try again in 1-RTT" response code,
to accommodate servers that cannot buffer the entire 0-RTT request and
other edge cases.  But I don't think that code is controversial to have.
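
To make that concrete, here is a rough sketch of the kind of per-request
policy I have in mind, in Python and purely for illustration -- the
safe-method whitelist and the buffering limit are arbitrary placeholders,
not proposals:

    from enum import Enum

    class EarlyDataDecision(Enum):
        PROCESS_NOW = "act on it before the handshake completes"
        DEFER = "buffer until the handshake completes"
        RETRY_1RTT = "reject with the 'try again in 1-RTT' response code"

    # Hypothetical per-request buffering limit; a real server would pick
    # its own value, or refuse to buffer early data at all.
    EARLY_BUFFER_LIMIT = 16 * 1024

    def decide(method: str, body_len: int, can_buffer: bool) -> EarlyDataDecision:
        """Server-side policy for a request that arrived in 0-RTT early
        data, before the handshake has completed."""
        if method in {"GET", "HEAD", "OPTIONS"} and body_len == 0:
            # Low replay risk for safe, bodyless requests; a server that
            # opted in to early data could act on these right away.
            return EarlyDataDecision.PROCESS_NOW
        if can_buffer and body_len <= EARLY_BUFFER_LIMIT:
            # Completing the handshake proves the client holds the PSK,
            # ruling out a third-party replay of the captured 0-RTT flight.
            return EarlyDataDecision.DEFER
        # Cannot (or will not) buffer the whole request: push the client
        # back to 1-RTT with the proposed response code.
        return EarlyDataDecision.RETRY_1RTT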

> It's much trickier for intermediaries, who make this decision without
> a great deal of information, but the calculus is still fundamentally
> the same.

I don't think we should let a lack of clarity on what the proxy
situation will look like keep us from writing up what to do for direct
client/server interactions -- yes, proxies would be restricted to 1-RTT,
but that's the state of affairs with TLS 1.2 anyway.



On 05/12/2017 03:21 AM, Stefan Eissing wrote:
> What a client wants to send early, might not be what the server is willing
> to process right away. Servers can always choose to not enable early data at
> all, or, in case of doubt about the data, wait for the handshake to complete.
>
> At least in h2 it seems to be clear how to do that, is there a way for 
> http/1.1 to do the same? Is there maybe a generic TLS1.3 way to do that?

There cannot be a generic TLS 1.3 way to do so; it would be a layering
violation.  TLS just moves the bytes; the interpretation ("willing to
process right away") must be done by the application layer.  The
application is of course free to implement the "buffer everything until
handshake completion" strategy, but I am not sure we want to mandate
that level of buffering.
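
As a purely illustrative sketch of that strategy, assuming a hypothetical
TLS connection object with read_early_data(), continue_handshake() and
handshake_complete() methods (no particular library's API is implied):

    import io

    def read_request_bytes(conn) -> bytes:
        """Buffer all early data at the application layer and release it
        for processing only once the handshake has completed."""
        buffered = io.BytesIO()
        while not conn.handshake_complete():
            chunk = conn.read_early_data()    # b"" when nothing is pending
            if chunk:
                buffered.write(chunk)         # hold on to it, don't process
            else:
                conn.continue_handshake()     # drive the handshake forward
        # Handshake done: a pure replay of the captured ClientHello could
        # not have gotten here, so hand the buffered bytes to HTTP now.
        return buffered.getvalue()

The obvious cost is that "buffered" can grow without bound, which is
exactly the level of buffering I am reluctant to mandate.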

> If we can define a way for TLS receivers to wait for the handshake, then
> there is no need to expect failure from early data, except servers not
> following this strategy or being reconfigured between sessions.

I don't think there's a reliable way to do that, and we'll have to
provision a way for clients to handle early data failures.
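
Mechanically, the client-side fallback would look something like the
sketch below (again a hypothetical connection API, and the decision of
whether the request is safe to resend is deliberately left to the caller
-- this is not a recommendation to always resend):

    def send_with_early_data(conn, request_bytes: bytes, may_resend: bool) -> bytes:
        """Send a request as 0-RTT early data and handle the case where
        the server rejects the early data and falls back to 1-RTT."""
        conn.write_early_data(request_bytes)   # optimistic 0-RTT send
        conn.complete_handshake()
        if not conn.early_data_accepted():
            # The early flight was discarded at the TLS layer.
            if not may_resend:
                raise RuntimeError("early data rejected; request not resent")
            # Resend over the now-complete, replay-safe connection.
            conn.write(request_bytes)
        return conn.read_response()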




On 05/12/2017 06:36 AM, Kazuho Oku wrote:
> 2017-05-12 17:21 GMT+09:00 Poul-Henning Kamp <phk@phk.freebsd.dk>:
>> --------
>> In message <CANatvzw9zeVfQMBCXGhakFomL2MaTqFNF65jKWHCDXQQ8rxcTA@mail.gmail.com>
>> , Kazuho Oku writes:
>>
>>> The kind of the deployment I was considering was one that deploys a
>>> TLS terminator that just decrypts the data and sends to backend (i.e.
>>> TLS to TCP, not an HTTP reverse proxy). My idea was that you could add
>>> a new field to the Proxy Protocol that indicates the amount of 0-RTT
>>> data that follows the header of the PROXY Protocol.
>> Can you even know that at the time you send the PROXY protocol header ?
> Actually, you do not even need to know or transmit the exact amount of
> the 0-RTT data when you transmit the PROXY protocol header. What you
> need to transmit is the amount of replayable data.

I don't think that information will always be available, as more 0-RTT
data may still be arriving.

> I believe that there could be several ways, but something like below
> should work.
>
> * postpone sending the PROXY Protocol header until receiving the first
> flight of 0-RTT data
> * include the size of the 0-RTT data being received as an attribute in
> the PROXY Protocol header that is sent to the backend

There can be multiple flights of 0-RTT data; you don't know it has ended
until the EndOfEarlyData message, which can arrive quite late.

> * postpone forwarding the rest of 0-RTT data to the server until the
> handshake succeeds

Does this not require committing to buffer everything (else)?  It's far
from clear that all devices will be able to do so.
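
For concreteness, here is how I read the proposal, with a made-up textual
header standing in for a real PROXY protocol extension (the PROXY
protocol has no such field today) and a hypothetical TLS-terminator API:

    def forward_connection(tls_conn, backend_sock):
        """Sketch of the terminator-side flow as I understand it."""
        # 1. Wait for the first flight of early data (possibly empty).
        first_flight = tls_conn.read_early_data()
        # 2. Tell the backend how many of the upcoming bytes are replayable.
        #    (Made-up field, shown only to make the idea concrete.)
        backend_sock.sendall(b"EARLY-DATA-LEN: %d\r\n" % len(first_flight))
        backend_sock.sendall(first_flight)
        # 3. Everything that arrives after the advertised length has been
        #    sent must be held back until the handshake completes -- this
        #    is the step that can require unbounded buffering.
        held_back = bytearray()
        while not tls_conn.handshake_complete():
            held_back += tls_conn.read_early_data()
        backend_sock.sendall(bytes(held_back))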

> The assumption behind the approach is that the 0-RTT data from the
> client (e.g. HTTP request) will likely fit in one packet, possibly in
> the same packet that carries ClientHello (or in a small number of
> packets that will arrive while the server does DH operation). The
> approach also uses the fact that a successful handshake can be used to
> rule out the possibility of the 0-RTT data being replayed by an
> attacker.


I cannot see how that would be a widely valid assumption, let alone
universally valid.

In the general case, I would expect it to be possible to transmit
multiple complete (h2) requests in 0-RTT data.  Is your proposal to only
let the first one through and buffer the rest?




On 05/11/2017 12:15 PM, Ilari Liusvaara wrote:
> Reading the cryptographer's concerns regarding 0-RTT, the main ones
> seem to be:
>
> - Possible lack of strong anti-replay. Possibly leading to >10^6 (!!!)
>   replays. 
>   * This is enough to exploit pretty subtle side-channel attacks. And
>     these attacks are at least extremely difficult to defend against.
>   * This causes severe problems for high-performance rate-limiting
>     systems.

I mostly expect "all" servers to rate-limit 0-RTT acceptances to
O(10^4.5) per second per SNI name (or similar), i.e., a level that is not
expected to be reached during normal operation but that would slow down a
side-channel attack.  Perhaps we should include guidance on this.
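
A minimal sketch of what I mean, with the limit and the one-second window
as placeholder numbers only (10^4.5 is roughly 30,000):

    import time
    from collections import defaultdict

    MAX_0RTT_ACCEPTS_PER_SEC = 30000   # placeholder, not a recommendation

    class EarlyDataRateLimiter:
        """Count 0-RTT acceptances per SNI name in one-second windows and
        fall back to 1-RTT once the budget is exhausted."""
        def __init__(self, limit: int = MAX_0RTT_ACCEPTS_PER_SEC):
            self.limit = limit
            self.window = int(time.time())
            self.counts = defaultdict(int)

        def allow(self, sni_name: str) -> bool:
            now = int(time.time())
            if now != self.window:           # new one-second window
                self.window = now
                self.counts.clear()
            if self.counts[sni_name] >= self.limit:
                return False                 # reject early data; force 1-RTT
            self.counts[sni_name] += 1
            return True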




On 05/11/2017 10:30 AM, Kazuho Oku wrote:
> What I am arguing against is creating a specification that suggests a
> user agent (not an intermediary) resend an HTTP request on a 0-RTT
> enabled TLS connection. 0-RTT is an optimization for connections with
> high latency. Such connections typically cost a certain amount of money.

"Says who?"  There are many potential use cases for 0-RTT, not all
written down.

My understanding is that browsers are latching onto research showing
that even 20 milliseconds of extra delay can affect user response
statistics, and are trying to go as fast as possible.  That applies even
on broadband, not just over satellite links.

> Asking the user agent to resend an HTTP request that has been sent over
> a 0-RTT connection not only eliminates the merits provided by 0-RTT
> but also doubles the consumed bandwidth. That does not sound right to
> me. I think the right advice we should provide for such a case is: turn
> off 0-RTT.

I don't think we should recommend that user agents resend HTTP requests
whose 0-RTT data was rejected, but I think we will encounter some
situations where it is unavoidable.  We can give guidance on how best to
avoid it, of course.




On 05/11/2017 06:31 AM, Kazuho Oku wrote:
> So, while I agree that it is beneficial to have an agreement on how
> the interaction scheme between the origin server and the application
> running behind (possibly as an informational RFC), I do not see a
> strong reason that we need to introduce some kind of profile due to the
> introduction of 0-RTT data in TLS 1.3.

Just to clarify: it is mandatory that we have a profile document that
specifies the interaction between TLS 0-RTT data and HTTP; the TLS 1.3
specification says that "[p]rotocols MUST NOT use 0-RTT data without a
profile that defines its use".  That profile document may or may not end
up being very simple, but it must exist.



On 05/10/2017 07:23 PM, Mark Nottingham wrote:
>> 3) Being very explicit on how to handle early data that is rejected and forced to be resent following connection establishment (ie, when the server forces a client into 1-RTT mode by rejecting the early data).  A worst case would be if a server actually handled the early data anyways but the client thought the server had rejected it.  In HTTP/1.1 this could result in them being off-by-one in requests/responses (and could lead to some HTTP Request Smuggling style bugs and vulnerabilities.)
> I'd hope that TLS/QUIC are explicit here, and that we wouldn't need to be; that's the beauty of layered protocols :)

Well, TLS/QUIC are supposed to be explicit about whether early data is
rejected at the TLS layer, and we can probably rely on that.  But (1)
acceptance at the TLS layer just means that the data is passed on to the
application/HTTP stack, which gets its own crack at rejecting (subsets
of) the early data.  And, of course, (2) there can always be
implementation bugs.  So we'll want to be careful about the semantics of
what rejection at the different layers means and how it is handled.

-Ben
