- From: Willy Tarreau <w@1wt.eu>
- Date: Thu, 11 May 2017 16:02:48 +0200
- To: Stefan Eissing <stefan.eissing@greenbytes.de>
- Cc: Kazuho Oku <kazuhooku@gmail.com>, Mark Nottingham <mnot@mnot.net>, Erik Nygren <erik@nygren.org>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>, "Ponec, Miroslav" <mponec@akamai.com>, "Kaduk, Ben" <bkaduk@akamai.com>
On Thu, May 11, 2017 at 03:51:51PM +0200, Willy Tarreau wrote:
(...)
> Also in practice, 0-RTT will be effective at multiple places:
>
>        0RTT      0RTT     clear/TLS+keep-alive
> client ----> CDN ----> LB ----> server
>
> In this chain the CDN will not know whether 0RTT is valid or not for the
> client, and it will pass the request using 0RTT again to the origin's edge,
> made of the load balancer, which has no clue either about the validity of
> 0RTT here. Only the origin server will possibly know, but 0RTT will have
> been used twice in this diagram. We're in the exact situation where we
> want any agent in the chain to be able to say "4xx, retry this please",
> so that the agent closest to the client does it first and saves a lot on
> RTTs to fix the problem.

Another point: given that the problem here is to properly map HTTP semantics
on top of a potentially unsafe transport, I really think it makes sense to
use HTTP semantics (such as status codes) to address it.

It also means, for example, that clients will be able to send POST requests
(or any request larger than a few MSS) with "Expect: 100-continue", letting
the other end decide whether to return a 100 or a 4xx. So, for zero extra
cost compared to 1-RTT, we can benefit from optimally safe opportunistic use
of 0-RTT most of the time, in a way where safety can be decided at the same
place the action will be taken.

Willy
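[Editor's illustration] The "any agent can say 4xx, retry this" idea above can be sketched as toy Python. The code is not from the thread: 425 is used as an illustrative 4xx value (it later became the actual "Too Early" code in RFC 8470, published after this email), and all helper names and the replay-safety heuristic are invented for the sketch.

```python
# Sketch of the retry-on-4xx idea from the email, with invented names.
# Any agent (CDN, LB, origin) that receives a request over not-yet-validated
# 0-RTT data can reject it; the agent closest to the client then retries it
# over the completed 1-RTT handshake, saving round trips to the origin.

TOO_EARLY = 425  # illustrative "4xx, retry this please" signal

def handle_request(method: str, handshake_complete: bool) -> int:
    """Server-side decision: accept the request, or ask for a retry.

    Safety is decided at the same place the action would be taken: a
    replayed POST could repeat its side effects, so non-idempotent
    requests are deferred until the handshake confirms the client.
    (Treating GET/HEAD as replay-safe is a simplification for the sketch.)
    """
    replay_safe = method in ("GET", "HEAD")
    if not handshake_complete and not replay_safe:
        return TOO_EARLY
    return 200

def send_with_early_data(method: str) -> int:
    """Client-side: attempt 0-RTT first, fall back to 1-RTT on the 4xx."""
    status = handle_request(method, handshake_complete=False)
    if status == TOO_EARLY:
        # Retry over the completed handshake; no application-level
        # knowledge of 0-RTT validity is needed anywhere in the chain.
        status = handle_request(method, handshake_complete=True)
    return status
```

In this model the intermediaries never need to know whether 0-RTT was valid for the original client; the 4xx propagates the decision back, and the retry happens at the hop nearest the client.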
Received on Thursday, 11 May 2017 14:04:02 UTC