Re: HTTP profile for TLS 1.3 0-RTT early data?

On Fri, May 12, 2017 at 12:30:18AM +0900, Kazuho Oku wrote:
> > I may be missing something obvious to you but I don't see how that
> > fixes the problem.
> 
> The kind of deployment I was considering was one that uses a TLS
> terminator that just decrypts the data and sends it to the backend
> (i.e. TLS to TCP, not an HTTP reverse proxy). My idea was that you
> could add a new field to the PROXY protocol that indicates the amount
> of 0-RTT data that follows the PROXY protocol header.

OK, I get it now. However, that would be very wrong from an HTTP point
of view: we'd be passing byte counts of header data, which every hop
transforms along the chain, so the count is meaningless by the time it
reaches the application (which has no such notion anyway, since it
accesses its headers from an array). For example, as soon as an
intermediary rewrites the Host header or appends X-Forwarded-For, the
forwarded request no longer matches the advertised byte count, and
nothing downstream can tell which part of it arrived as early data.

> > What we do in haproxy is that we always wait for a full, perfectly
> > valid request before passing it. Once we have it, we apply some
> > policy rules based on multiple L7 criteria (host header, URI,
> > cookies, headers, you name it) and decide where to send it, or
> > to redirect or to reject it. So we already wait for these data.
> >
> > But my understanding of 0-RTT is that we can receive a replayed
> > request which will match all of this entirely. So it's not a matter
> > of length. What I'd want instead is to ensure that at the moment I
> > pass it to the server, I can tell the server "received over 0-RTT"
> > or "received over 1-RTT", and the server is able to make the
> > appropriate decision.
> >
> >> > Also in practice, 0-RTT will be effective at multiple places:
> >> >
> >> >           0RTT      0RTT     clear/TLS+keep-alive
> >> >   client  ----> CDN ----> LB ----> server
> >> >
> >> > In this chain the CDN will not know whether 0-RTT is valid or not for
> >> > the client, and it will pass the request using 0-RTT again to the
> >> > origin's edge, made of the load balancer, which has no clue either
> >> > about the validity of 0-RTT here. Only the origin server will possibly
> >> > know. But 0-RTT will have been used twice in this diagram. We're in
> >> > the exact situation where we want any agent in the chain to be able to
> >> > say "4xx retry this please" so that the agent closest to the client
> >> > does it first and saves a lot on RTT to fix the problem.
> >>
> >> My view is that this is an issue behind the origin and that it should
> >> be handled as such, instead of creating a profile that requires a user
> >> agent to resend an HTTP request.
> >
> > But in terms of applications, what does it mean to you to "handle it
> > as such"? The application receives an "OK" validation for a sensitive
> > operation over 0-RTT; it cannot realistically ask the user "Please,
> > your OK was received over an unsafe channel, would you please confirm
> > again?". We need to provide a way for the server to validate the
> > contents one way or another.
> 
> What I am arguing against is creating a specification that suggests a
> user agent (not an intermediary) resend an HTTP request on a
> 0-RTT-enabled TLS connection. 0-RTT is an optimization for connections
> with high latency. Such connections typically cost a certain amount of
> money. Asking the user agent to resend an HTTP request that has been
> sent over a 0-RTT connection not only eliminates the merits provided
> by 0-RTT but also doubles the consumed bandwidth. That does not sound
> right to me. I think the right advice we should provide for such a
> case is: turn off 0-RTT.

In fact, here I disagree. It doesn't eliminate the benefit if rejection
happens rarely, and it allows the client to make aggressive use of
0-RTT because it knows that if the early data is not accepted, the
retry will still work. That's better than having the client censor
itself, thinking "this is a POST, it's not reasonable". If the server
consumes mostly POSTs and 90% of them are idempotent, the client will
benefit from 90% 0-RTT and 10% retries instead of 100% 1-RTT, which
overall is still a win.
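
A back-of-the-envelope calculation makes the point. This is a rough
sketch with my own numbers; the assumption that a rejected attempt
costs about two extra round trips (one for the 1-RTT fallback, one for
the rejection-and-resend) is illustrative only:

    # Expected handshake latency per request, in round trips.
    p_accept = 0.9       # requests the client dares to send as 0-RTT
    cost_accept = 0.0    # request rides the first flight: no extra RTT
    cost_reject = 2.0    # assumed: 1-RTT fallback + one RTT to resend
    with_0rtt = p_accept * cost_accept + (1 - p_accept) * cost_reject
    always_1rtt = 1.0
    print(with_0rtt, always_1rtt)   # 0.2 vs 1.0 RTT on average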

Also, the server's response could provide hints regarding what is
allowed or not. It could reply something like
"0-RTT: {never,non-post,get,not-this-one}" so that the client can
adapt and keep retries very rare, as sketched below.
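
For instance (header name and values entirely hypothetical, nothing of
the sort is specified anywhere):

    HTTP/1.1 200 OK
    0-RTT: non-post

On the next resumption the client would then send everything except
POSTs as early data, and fall back to 1-RTT for the rest.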

I'd rather avoid reproducing the HTTP/1 pipelining trouble we've all
seen: a powerful performance improvement that ends up almost always
disabled because there's no way to recover from a failure.

> Between the origin and the application server, I think that we can and
> also need to resend HTTP requests. The cost of the network there is
> typically lower than that of the connection to the end user.

But that's also where it's the hardest, because that's where you'd
have to start buffering everything in order to retry. Very often that's
not technically feasible, or it requires limiting what you accept to
what you can retransmit, pushing the congestion back to the client
side while waiting for the server's acceptance.
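
To make the constraint concrete, here is a minimal sketch (hypothetical
pseudocode, not haproxy internals): to be able to resend, the proxy
must keep a copy of everything it has forwarded until the upstream has
accepted it, and must stop accepting data once that replay buffer is
full:

    # Hypothetical sketch of the replay-buffer constraint.
    # read_chunk() returns the next chunk from the client (b"" at end);
    # send_upstream(data) returns True if the upstream accepted the
    # request, False if it rejected it (e.g. 0-RTT refused).
    MAX_REPLAY = 16 * 1024      # arbitrary cap on what we can resend

    def relay_with_retry(read_chunk, send_upstream):
        buf = b""
        while True:
            chunk = read_chunk()
            if not chunk:
                break
            if len(buf) + len(chunk) > MAX_REPLAY:
                # beyond this point a retry can no longer be
                # guaranteed: reject, or stop reading and push
                # congestion back toward the client
                raise RuntimeError("request too large to retry safely")
            buf += chunk
        if send_upstream(buf):      # first attempt
            return True
        return send_upstream(buf)   # retry, only possible thanks to buf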

> Connections between the two can be kept alive for a much longer time
> to avoid the overhead of connection establishment. And as you have
> discussed, the act of coalescing incoming requests from multiple
> connections inevitably means that we need to notify the application
> server, using an HTTP header, whether each HTTP request was received
> entirely in 0-RTT (hence resending becomes unavoidable).

Absolutely.
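
For the record, such a marker could look like this on the wire (the
header name here is purely hypothetical, nothing is standardized):

    POST /payment HTTP/1.1
    Host: origin.example.com
    Early-Data: 1

The coalescing hop would add it to every request received as early
data, and the application server could then respond with the "4xx
retry this please" status discussed above whenever it refuses to
process that request before the handshake completes.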

Willy

Received on Thursday, 11 May 2017 16:26:01 UTC