- From: Noah Mendelsohn <nrm@arcanedomain.com>
- Date: Fri, 13 Jul 2012 10:56:06 -0400
- To: "www-tag@w3.org" <www-tag@w3.org>
- CC: Mark Nottingham <mnot@mnot.net>
An interesting critique of SPDY has been posted [1], and may be of interest to TAG members who are following the protocol space. I think the points about routers and performance are very interesting, and not something we've discussed. Quoting:

> In the time frame where HTTP/2.0 will become standardized, HTTP routers will routinely deal with 40Gbit/s traffic and people will start to architect for 1Tbit/s traffic.

So, that's about 5 GBytes/sec. If we assume that a modern CPU core processes, by coincidence, on the order of a few billion instructions/sec, we're on the order of one CPU instruction per byte. Now, there will be multiple cores, specialized hardware might scan for packet headers, and so on, but it's not hard to convince yourself that some serious optimization and performance tuning is needed just to keep up.

Stated differently: building protocols that make it unnecessarily hard to recognize and filter headers, or that require decompression to find them, could really limit the ability of routers and firewalls to do the work they need to do, including protecting against denial-of-service attacks. The implication is that SPDY is a step backward, not forward, in these respects.

Noah

[1] https://www.varnish-cache.org/docs/trunk/phk/http20.html
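
P.S. A quick back-of-envelope check of the arithmetic above, as a Python sketch; the 3 billion instructions/sec figure is an assumed round number standing in for "a modern CPU core":

```python
# Instruction budget per byte at the quoted line rate.
line_rate_bits = 40e9                 # 40 Gbit/s of router traffic
bytes_per_sec = line_rate_bits / 8    # 5e9 bytes/s, i.e. 5 GBytes/sec
instructions_per_sec = 3e9            # assumed: ~3 billion instructions/sec per core

budget = instructions_per_sec / bytes_per_sec
print(f"{budget:.1f} instructions per byte")  # 0.6: under one instruction per byte
```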
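
And a minimal illustration of the filtering point: SPDY compresses its header blocks with zlib, so an intermediary cannot match a header name against the wire bytes without inflating the block first. This sketch omits SPDY's actual framing and its preset compression dictionary; it only shows why compressed headers resist direct scanning:

```python
import zlib

headers = b"host: example.com\r\nuser-agent: demo\r\n"
compressed = zlib.compress(headers)

print(b"host:" in headers)                      # True: plain-text headers scan directly
print(b"host:" in compressed)                   # (almost surely) False: name not visible on the wire
print(b"host:" in zlib.decompress(compressed))  # True, but only after decompressing
```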
Received on Friday, 13 July 2012 14:56:37 UTC