Re: HTTP profile for TLS 1.3 0-RTT early data?

On Thu, May 11, 2017 at 11:27:46PM +0900, Kazuho Oku wrote:
> I believe that 0-RTT data is a nice feature of TLS 1.3, and would not
> like to see its use constrained by how some deployments are designed.

Don't get me wrong, I think so as well and would like to be able to
enable it. But I don't want to give my users a totally unsafe thing
that they have no control over. What I know for sure is that most of
our users, at the load balancer level, have no idea about the safety
of the requests they're forwarding to their hosted applications. Some
of them may apply heuristics like blocking POSTs, but as Erik
mentioned, that is both not enough and too much at the same time.

> That said, I would argue that TLS offloaders are already sending
> metadata (e.g., the cipher suite being used) to the backend server,
> and that it could possibly be extended so that the applications
> running behind the offloaders can distinguish between 0-RTT and 1-RTT
> data.

Yes, which is why I want to pass this info so that the server sends
the signal back :-) In short I'm *willing* to make it possible for
hosting providers to offer 0-RTT to customers who explicitly ask for
it once they understand the impacts, but not by default to everyone.

> One simple (but likely effective) method of sending such metadata
> would be to advertise a small 0-RTT window (e.g., one or two MTUs)
> from the TLS offloader, and if the client uses 0-RTT data, postpone
> establishing the connection from the TLS offloader to the backend
> until the offloader has received all the 0-RTT data (actually, I
> believe there is a high chance that you would be receiving 0-RTT
> data while doing the Diffie-Hellman operations to resume the
> connection). Once all the 0-RTT data has been received, the
> offloader will connect to the backend and send a chunk of metadata
> including the selected cipher suite _and_ the amount of 0-RTT
> application data that will follow the metadata.
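
If I read this correctly, the metadata chunk would look roughly like
the following (a quick Go sketch of my own understanding of the
proposal; none of this framing is specified anywhere):

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
    )

    // earlyDataPreamble is one possible shape for the metadata chunk:
    // the negotiated cipher suite plus the number of 0-RTT application
    // bytes that will follow it on the backend connection.
    type earlyDataPreamble struct {
        CipherSuite uint16 // TLS cipher suite identifier
        EarlyLen    uint32 // length of the buffered 0-RTT data
    }

    func (p earlyDataPreamble) encode() []byte {
        var buf bytes.Buffer
        _ = binary.Write(&buf, binary.BigEndian, p) // fixed-size fields, cannot fail
        return buf.Bytes()
    }

    func main() {
        p := earlyDataPreamble{CipherSuite: 0x1301, EarlyLen: 512}
        fmt.Printf("preamble: % x\n", p.encode())
    }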

I may be missing something obvious to you but I don't see how that
fixes the problem.

What we do in haproxy is always wait for a full, perfectly valid
request before passing it on. Once we have it, we apply policy rules
based on multiple L7 criteria (Host header, URI, cookies, other
headers, you name it) and decide where to send it, or whether to
redirect or reject it. So we already wait for these data.
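
In rough Go-like terms the shape of that logic is something like the
sketch below (haproxy itself is written in C and does far more; the
backend addresses and the "X-Blocked" rule are made up purely for
illustration):

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Hypothetical backends, purely for illustration.
        appA, _ := url.Parse("http://10.0.0.10:8080")
        appB, _ := url.Parse("http://10.0.0.11:8080")
        toA := httputil.NewSingleHostReverseProxy(appA)
        toB := httputil.NewSingleHostReverseProxy(appB)

        router := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // By the time this runs, the full request head has been parsed.
            switch {
            case r.Host == "app-a.example.com": // route on the Host header
                toA.ServeHTTP(w, r)
            case r.URL.Path == "/legacy": // or redirect
                http.Redirect(w, r, "/new", http.StatusMovedPermanently)
            case r.Header.Get("X-Blocked") != "": // or reject on a policy rule
                http.Error(w, "forbidden", http.StatusForbidden)
            default:
                toB.ServeHTTP(w, r)
            }
        })
        log.Fatal(http.ListenAndServe(":8080", router))
    }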

But my understanding of 0-RTT is that we can receive a replayed
request which will match all of this entirely. So it's not a matter
of length. What I'd want instead is to ensure that at the moment I
pass the request to the server I can tell it "received over 0-RTT"
or "received over 1-RTT", so that the server can take the
appropriate decision.
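
Concretely, all I need on the offloader side is something like the
sketch below, where the "Early-Data" header name and the
wasEarlyData() hook are pure invention on my part (nothing of the
sort exists today, and Go's crypto/tls does not expose 0-RTT at all):

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    // wasEarlyData is a stand-in for whatever the TLS stack would
    // report about the bytes carrying this request.
    func wasEarlyData(r *http.Request) bool { return false }

    func main() {
        origin, _ := url.Parse("http://backend.internal:8080") // hypothetical origin
        proxy := httputil.NewSingleHostReverseProxy(origin)
        forward := proxy.Director
        proxy.Director = func(r *http.Request) {
            forward(r)
            if wasEarlyData(r) {
                r.Header.Set("Early-Data", "1") // received over 0-RTT
            } else {
                r.Header.Del("Early-Data") // received over 1-RTT; strip spoofed values
            }
        }
        // A real offloader would terminate TLS here instead of plain HTTP.
        log.Fatal(http.ListenAndServe(":8080", proxy))
    }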

> > Also in practice, 0-RTT will be effective at multiple places :
> >
> >           0RTT      0RTT     clear/TLS+keep-alive
> >   client  ----> CDN ----> LB ----> server
> >
> > In this chain the CDN will not know whether 0RTT is valid or not for the
> > client, and it will pass the request using 0RTT again to the origin's edge
> > made of the load balancer which has no clue either about the validity of
> > 0RTT here. Only the origin server will possibly know. But 0RTT will have
> > been used twice in this diagram. We're in the exact situation where we
> > want any agent in the chain to be able to say "4xx retry this please"
> > so that the closest agent to the client does it first and saves a lot on
> > RTT to fix the problem.
> 
> My view is that this is an issue behind the origin and that it
> should be handled as such, instead of creating a profile that
> requires a user-agent to resend an HTTP request.

But in terms of applications, what does it mean to you to "handle it
as such"? If the application receives an "OK" confirmation for a
sensitive operation over 0-RTT, it cannot realistically tell the user
"your OK was received over an unsafe channel, would you please
confirm again?". We need to provide a way for the server to validate
the contents one way or another.
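
What I have in mind on the application side is no more than this kind
of check (again a sketch: the "Early-Data" marker is the same invented
header as above, and the 425 value is just a placeholder since no
status code has been assigned for "retry this over 1-RTT"):

    package main

    import (
        "log"
        "net/http"
    )

    // Placeholder status meaning "please retry over 1-RTT"; the value
    // is only for illustration.
    const retryEarlyData = 425

    func confirmOrder(w http.ResponseWriter, r *http.Request) {
        if r.Header.Get("Early-Data") == "1" {
            http.Error(w, "request arrived as 0-RTT early data, please retry", retryEarlyData)
            return
        }
        // Safe to perform the sensitive, non-idempotent operation here.
        w.Write([]byte("order confirmed\n"))
    }

    func main() {
        http.HandleFunc("/confirm", confirmOrder)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }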

> I am not against defining how servers
> running behind the origin should resend the requests (for better
> interoperability).

That's all I'm asking for :-)

Willy

Received on Thursday, 11 May 2017 14:52:11 UTC