- From: Patrick McManus <mcmanus@ducksong.com>
- Date: Fri, 4 Mar 2016 16:16:22 -0500
- To: Willy Tarreau <w@1wt.eu>
- Cc: Patrick McManus <mcmanus@ducksong.com>, Joe Touch <touch@isi.edu>, HTTP Working Group <ietf-http-wg@w3.org>
- Message-ID: <CAOdDvNohs9TpLxMg-Vn-B=Ux2VumPL92hgqubARa7cgVinUmDg@mail.gmail.com>
To the extent this document can help promote efficient interop, rather than just application tuning (and I acknowledge that is a blurry line), I think it has promise as an IETF work and support adoption. I was skeptical of it being able to play that role, but I think Willy has raised an interesting point along those lines below. The document is clearly an -01 which needs more exploration, but it was nice of Daniel to put something forward. (Daniel is afk for a bit - I'm sure we'll hear from him in the coming days.)

On Thu, Mar 3, 2016 at 5:01 PM, Willy Tarreau <w@1wt.eu> wrote:

> monitors). Many firewalls are tuned with aggressive FIN_WAIT/TIME_WAIT
> timeouts which cause their session to expire before the other side dares
> to retransmit. The server then remains for a long time in LAST_ACK state,
> resending this last ACK packet for some time before giving up.

Willy, thanks for the giant wall of text :). I have tried to distill it to this, which I think is the essence of the discussion.

I think the firewall issue is one worth documenting for interop purposes - it can't know the end state of receivers on either side, so at least from a forwarding perspective it should be configured permissively in an environment that anticipates low TW. That seems like a valuable point to capture.

> But by this time, our nice client has already used all other ports and
> needs to reuse this port. Since its TIME_WAIT timeout was reduced to
> something lower than the server's LAST_ACK, it believes the port is
> free and reuses it. It sends a SYN which passes through the firewall,
> this SYN reaches the server which disagrees and sends an RST back
> (when the client picked a new SYN above the end of previous window)
> or an ACK which will generally be blocked by the firewall, or if the
> firewall accepts it, will be transmitted to the client which will then
> send an RST, wait one second and send the SYN again. As you can imagine,
> this dance is quite counter-productive for the performance since you
> convert hundreds of microseconds to multiples of seconds to establish
> certain connections.

This is interesting, and a bit different from the integrity issues normally associated with TW, where a stray segment from an old connection could inject data into a new one (which https can protect against - at least in a deterministic fast-fail sense).

On the one hand, we've gone from the client having a high rate of fast 0RTT fails (blocked from initiating by TW).. to a situation where 3 things are going on:

1] an unquantified "large" fraction of quick successes (no packet loss impact, no state machine out of sync, and no blockage by a timer)
2] a number of cases of fast 1RTT fail where an RST is received by the client
3] a fraction that may succeed or fail, but much more slowly due to retry behavior with its set of lovely constants and backoffs

If I've got that (at least vaguely) right, there seem to be situational tradeoffs as to whether that is 'better' or not. Sounds like good discussion material in a tuning doc :)

Within the scope of "TCP for HTTP" would you say something different? (And sure, I agree legacy TCP might not be the fun thing here.. but it's the topic at hand.)
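For illustration, here is a minimal Python sketch of how those three cases surface at a client's connect() call; the server address, local port range, and timeout values are assumptions for the example, not anything taken from Willy's setup or the draft.

```python
# Minimal sketch (illustrative only) of the three outcomes described above
# when a client reuses ports aggressively after shortening TIME_WAIT.
import socket
import time

REMOTE = ("192.0.2.10", 80)   # example server address (TEST-NET-1)

def classify_connect(local_port, timeout=5.0):
    """Connect from a fixed local port and report which case we hit."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Allow rebinding a port the local stack may still consider in use
    # (e.g. lingering in a shortened TIME_WAIT).
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.settimeout(timeout)
    s.bind(("0.0.0.0", local_port))
    start = time.monotonic()
    try:
        s.connect(REMOTE)
        # Case 1: quick success -- no stale state left in the path.
        return "ok in %.1f ms" % ((time.monotonic() - start) * 1000)
    except ConnectionRefusedError:
        # Case 2: the server, still in LAST_ACK, answered our SYN with an
        # RST -- a fast 1RTT failure visible immediately to the client.
        return "fast fail (RST received)"
    except socket.timeout:
        # Case 3: the SYN was answered with a stale ACK or dropped by the
        # firewall; the kernel retransmits the SYN after roughly a second,
        # so the handshake either completes late or times out here.
        return "slow retry path (>= %.0f s)" % timeout
    finally:
        s.close()

if __name__ == "__main__":
    for port in range(40000, 40005):   # a tiny slice of the ephemeral range
        print(port, classify_connect(port))
```

The initial SYN retransmission timeout is on the order of one second (RFC 6298), which is why case 3 is what turns sub-millisecond handshakes into multi-second ones.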
Received on Friday, 4 March 2016 21:16:49 UTC