- From: Willy Tarreau <w@1wt.eu>
- Date: Tue, 8 Jul 2014 00:22:17 +0200
- To: Roberto Peon <grmocg@gmail.com>
- Cc: Poul-Henning Kamp <phk@phk.freebsd.dk>, Johnny Graettinger <jgraettinger@chromium.org>, Greg Wilkins <gregw@intalio.com>, HTTP Working Group <ietf-http-wg@w3.org>
On Mon, Jul 07, 2014 at 03:04:43PM -0700, Roberto Peon wrote:
> Oy.
>
> The concept is fairly simple.
>
> If you need to know the entire size of the headers before one is allowed
> to send any of it (as would be the case), then one must wait for all of
> the headers data to arrive (and mutate it) before forwarding.
> Thus, adding at a minimum: header-bytes/bandwidth seconds of latency per
> gateway (not to mention the additional memory for the buffering).
> I'd expect this to add a 10th of a ms per gateway or so, more on more
> constrained links.

You're generally much below that, and I think you'll measure much better on
your own gateways (because I know you optimize for this). A long time ago I
measured haproxy's header parsing at 1.2 microseconds on a Pentium-M 1.7 GHz
(my laptop at the time) for an average request coming from my browser. The
rest is connection establishment (which you don't care about on the response
side), plus the NIC's IRQ latency, the switch-to-NIC serialization, and the
TCP stack. On a 1 Gbps port, serializing a 300-byte request over the wire
takes about 4 microseconds. The NIC's IRQ delivery is generally the worst
part, between 40 and 100 microseconds.

But anyway, I can easily understand that for your specific use case, where
you write both the server and the gateway, you can treat the two of them as
a single component and have each one off-load part of the job to save the
other one's resources. So I'm not shocked to see that you don't process
responses, for example.

Thanks for the details,
Willy
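[Editor's note: Willy's 4-microsecond figure is easy to reproduce with a
back-of-envelope calculation. The sketch below is illustrative only; the
per-packet overhead values (Ethernet preamble, header, FCS, inter-frame
gap, plus IP and TCP headers) are standard textbook numbers, not figures
taken from this mail.]

    #include <stdio.h>

    int main(void)
    {
        /* Assumed per-packet overhead in bytes (not from the email):
         * preamble 8 + Ethernet header 14 + FCS 4 + inter-frame gap 12
         * + IPv4 header 20 + TCP header 20 = 78 bytes. */
        const double overhead_bytes = 8 + 14 + 4 + 12 + 20 + 20;
        const double request_bytes  = 300.0;   /* average HTTP request */
        const double link_bps       = 1e9;     /* 1 Gbps port */

        double wire_bits = (request_bytes + overhead_bytes) * 8.0;
        double wire_us   = wire_bits / link_bps * 1e6;

        /* Prints roughly 3 microseconds; rounding up for TCP options
         * and scheduling slack lands near the ~4 us quoted above. */
        printf("serialization time: %.2f us\n", wire_us);
        return 0;
    }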
Received on Monday, 7 July 2014 22:22:44 UTC