- From: Kazuho Oku <kazuhooku@gmail.com>
- Date: Fri, 12 May 2017 00:30:18 +0900
- To: Willy Tarreau <w@1wt.eu>
- Cc: Stefan Eissing <stefan.eissing@greenbytes.de>, Mark Nottingham <mnot@mnot.net>, Erik Nygren <erik@nygren.org>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>, "Ponec, Miroslav" <mponec@akamai.com>, "Kaduk, Ben" <bkaduk@akamai.com>
2017-05-11 23:51 GMT+09:00 Willy Tarreau <w@1wt.eu>:
> On Thu, May 11, 2017 at 11:27:46PM +0900, Kazuho Oku wrote:
>> I believe that 0-RTT data is a nice feature of TLS 1.3, and would not
>> like to see its use constrained by how some deployments are designed.
>
> Don't get me wrong, I think so as well and would like to be able to
> enable it. But I don't want to give my users a totally unsafe thing
> that they have no control over. What I know for sure is that most of
> our users have no idea on the load balancer about the safety of the
> requests they're forwarding to their hosted applications. Some of
> them may apply some heuristics like blocking POSTs or whatever, but as
> mentioned by Erik it's not enough and too much at the same time.
>
>> That said, I would argue that TLS offloaders are already sending
>> metadata (e.g., the cipher suite being used) to the backend server,
>> and that it could possibly be extended so that the applications
>> running behind the offloaders can distinguish between 0-RTT and 1-RTT
>> data.
>
> Yes, which is why I want to pass this info so that the server sends
> the signal back :-) In short, I'm *willing* to make it possible for
> hosting providers to offer 0-RTT to customers who request it based on
> their explicit demand for it after they understand the impacts, but
> not by default to everyone.
>
>> One simple method (but likely an effective one) of sending such
>> metadata would be to advertise a small 0-RTT window (e.g., one or two
>> MTUs) from the TLS offloader, and if the client uses 0-RTT data, to
>> postpone establishing the connection from the TLS offloader to the
>> backend until the offloader receives all the 0-RTT data (actually, I
>> believe that there is a high chance that you would be receiving 0-RTT
>> data while doing Diffie-Hellman operations to resume the connection).
>> Once all the 0-RTT data are received, the offloader will connect to
>> the backend and send a chunk of metadata including the selected
>> cipher suite _and_ the amount of 0-RTT application data that would
>> follow the metadata.
>
> I may be missing something obvious to you but I don't see how that
> fixes the problem.

The kind of deployment I was considering was one that deploys a TLS
terminator that just decrypts the data and sends it to the backend
(i.e., TLS to TCP, not an HTTP reverse proxy). My idea was that you
could add a new field to the PROXY protocol that indicates the amount
of 0-RTT data that follows the PROXY protocol header.

> What we do in haproxy is that we always wait for a full, perfectly
> valid request before passing it. Once we have it, we apply some
> policy rules based on multiple L7 criteria (host header, URI,
> cookies, headers, you name it) and decide where to send it, or
> whether to redirect or reject it. So we already wait for these data.
>
> But my understanding of 0-RTT is that we can receive a replayed
> request which will match all of this entirely. So it's not a matter
> of length. What I'd want instead is to ensure that at the moment I
> pass it to the server I can tell the server "received over 0-RTT"
> or "received over 1-RTT", and the server be able to take the
> appropriate decision.
>
>> > Also in practice, 0-RTT will be effective at multiple places :
>> >
>> >            0RTT      0RTT    clear/TLS+keep-alive
>> >     client ----> CDN ----> LB ----> server
>> >
>> > In this chain the CDN will not know whether 0RTT is valid or not for the
>> > client, and it will pass the request using 0RTT again to the origin's edge
>> > made of the load balancer which has no clue either about the validity of
>> > 0RTT here. Only the origin server will possibly know. But 0RTT will have
>> > been used twice in this diagram.
>> > We're in the exact situation where we
>> > want any agent in the chain to be able to say "4xx retry this please"
>> > so that the closest agent to the client does it first and saves a lot on
>> > RTT to fix the problem.
>>
>> My view is that this is an issue behind the origin and that it should
>> be handled as such, instead of creating a profile that requires a
>> user agent to resend an HTTP request.
>
> But in terms of applications, what does it mean to you to "handle it as
> such"? The application receives an "OK" validation for a sensitive
> operation over 0-RTT; it cannot realistically ask the user "Please,
> your OK was received over an unsafe channel, would you confirm
> again?". We need to provide a way for the server to validate the
> contents one way or another.

What I am arguing against is creating a specification that suggests
that a user agent (not an intermediary) resend an HTTP request on a
0-RTT-enabled TLS connection.

0-RTT is an optimization for connections with high latency. Such
connections typically cost a certain amount of money. Asking the user
agent to resend an HTTP request that has already been sent over a 0-RTT
connection not only eliminates the benefit provided by 0-RTT but also
doubles the consumed bandwidth. That does not sound right to me. I
think the right advice we should provide for such a case is: turn off
0-RTT.

Between the origin and the application server, I think that we can and
also need to resend HTTP requests. The cost of the network there is
typically lower than that of the connection to the end user.
Connections between the two can be kept alive for a much longer time to
avoid the overhead of connection establishment. And as you have
discussed, the act of coalescing incoming requests from multiple
connections inevitably means that we need to notify the application
server, using an HTTP header, whether each HTTP request was received
entirely in 0-RTT (hence resending becomes unavoidable).
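The two signalling mechanisms discussed in this thread, a PROXY
protocol field announcing the amount of 0-RTT data that follows the
header, and a per-request 0-RTT marker that lets the origin ask the
closest agent to retry after the handshake, could be sketched roughly
as follows. This is a minimal illustration under assumptions of mine,
not a worked-out design: the TLV type 0xE0 is taken from the PROXY
protocol v2 custom range and is not a registered type, the 425 status
code is illustrative of a "4xx retry this please" signal, and the names
build_pp2_header, origin_decision, and SAFE_METHODS are invented here.

```python
import socket
import struct

# --- Offloader side: announce 0-RTT metadata over the PROXY protocol ---

# PROXY protocol v2 signature (12 fixed bytes).
PP2_SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"
# Hypothetical TLV type for "bytes of 0-RTT data that follow"; 0xE0-0xEF
# is the custom/application-specific range of the v2 spec.
PP2_TYPE_EARLY_DATA_LEN = 0xE0  # assumption, not a registered type

def build_pp2_header(src_ip, dst_ip, src_port, dst_port, early_data_len):
    """Build a PROXY protocol v2 header (TCP over IPv4) carrying a custom
    TLV that tells the backend how much 0-RTT data follows the header."""
    addrs = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
             + struct.pack("!HH", src_port, dst_port))
    value = struct.pack("!I", early_data_len)
    # TLV layout: one type byte, two length bytes, then the value.
    tlv = struct.pack("!BH", PP2_TYPE_EARLY_DATA_LEN, len(value)) + value
    body = addrs + tlv
    # 0x21 = version 2 + PROXY command; 0x11 = TCP over IPv4.
    return PP2_SIGNATURE + struct.pack("!BBH", 0x21, 0x11, len(body)) + body

# --- Origin side: decide whether a 0-RTT request must be retried ---

SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}  # harmless if replayed

def origin_decision(method, received_in_early_data):
    """Return the status the origin answers with: process the request,
    or ask the closest agent to the client to retry it over 1-RTT."""
    if received_in_early_data and method not in SAFE_METHODS:
        return 425  # "4xx retry this please" (illustrative code)
    return 200
```

With this split, only the agent nearest the client pays the retry cost,
and requests that are safe to replay are never resent at all.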
>> I am not against defining how servers
>> running behind the origin should resend the requests (for better
>> interoperability).
>
> That's all I'm asking for :-)
>
> Willy

-- 
Kazuho Oku
Received on Thursday, 11 May 2017 15:30:54 UTC