- From: Willy Tarreau <w@1wt.eu>
- Date: Thu, 26 Jun 2014 07:53:21 +0200
- To: Martin Thomson <martin.thomson@gmail.com>
- Cc: Mark Nottingham <mnot@mnot.net>, Poul-Henning Kamp <phk@phk.freebsd.dk>, Greg Wilkins <gregw@intalio.com>, HTTP Working Group <ietf-http-wg@w3.org>
On Wed, Jun 25, 2014 at 05:28:47PM -0700, Martin Thomson wrote:
> Sure, we might have arrived at what is only a local minimum, but
> without stronger justification I'm really reluctant to act on this.
> As far as it goes, Willy's numbers don't actually concern me that
> much; parallelism goes a long way to addressing those sorts of
> concerns.

Martin, while I can understand that such numbers are irrelevant to your
use case, and that you're not tempted by a last-minute change, I'd like
to point out that parallelism is orthogonal to this concern. Parallelism
is what currently makes it possible to reach close to 100G with HTTP/1.1,
and if the same hardware drops back to 10 or 20G, it's not parallelism
that will bring the performance back; that performance is simply lost to
a design which does not scale as well as the one it replaces.

Regards,
Willy
Received on Thursday, 26 June 2014 05:53:48 UTC