Re: Rechartering HTTPbis

From: patrick mcmanus <pmcmanus@mozilla.com>
Date: Fri, 27 Jan 2012 21:47:53 -0500
Message-ID: <4F2361D9.5030306@mozilla.com>
To: Willy Tarreau <w@1wt.eu>
CC: 'HTTP Working Group' <ietf-http-wg@w3.org>
Hey Willy,

On 1/27/2012 6:58 PM, Willy Tarreau wrote:
> For instance, haproxy requires around 40 GB of RAM to handle one million
> concurrent connections on Linux,

that's impressive to be sure.

I presume when we get around to really discussing SPDY as a base 
proposal, Google will share their experience of what it's like to deal 
with compressed headers across huge numbers of concurrent streams. One of 
the really good things about looking at SPDY is that there is 
significant code and operational experience behind it - if only every WG 
endeavor were so lucky! My impression is that it is frankly no big deal, 
but at really large volumes that's just second-hand information. One 
mitigating factor is that header compression normally uses 2KB windows and 
is quite effective with that fairly small amount of state. SPDY does 
allow bigger windows, but it would be reasonable in my mind to run some 
experiments and consider specifying something like 4KB or less for 
HTTP/2 if the results supported it.
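To make the window-size point concrete, here is a minimal sketch using plain 
deflate with a 2KB sliding window (wbits=11, i.e. 2^11 bytes; wbits=12 would 
give 4KB). This is just an illustration of small-window compression state - it 
is not SPDY's actual framing or shared dictionary, and the header block shown 
is made up:

```python
import zlib

# 2 KB sliding window: wbits=11 means a 2^11-byte window.
WBITS_2KB = 11

# Hypothetical header block, purely for illustration.
headers = (
    b"GET /index.html HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"User-Agent: Mozilla/5.0\r\n"
    b"Accept: text/html,application/xhtml+xml\r\n"
    b"Accept-Encoding: gzip, deflate\r\n"
    b"Cookie: session=abc123\r\n\r\n"
)

# Compress with the small window; the per-stream state is bounded by it.
comp = zlib.compressobj(level=9, wbits=WBITS_2KB)
compressed = comp.compress(headers) + comp.flush()

# Decompress with the matching window size.
decomp = zlib.decompressobj(wbits=WBITS_2KB)
restored = decomp.decompress(compressed)

assert restored == headers
print(len(headers), "->", len(compressed))
```

Even with the tiny window, typical header text compresses noticeably, which 
is why the per-stream memory cost stays modest.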

It's worth noting that in SPDY header compression is mandatory to apply, 
but message body compression is only mandatory to implement (i.e. if 
you receive a compressed body you have to decompress it, and you may 
generate one at any time, but you are not required to). Knowing that 
compression is always a possibility, regardless of what might have 
happened to an Accept-Encoding header in any kind of proxied environment, 
incents the compression of compressible data.

I'm ok with optional behaviors (such as compression) based on the data 
being transmitted. The implementation can look at the data and decide 
what the right thing to do is (your compressed log example, most media 
formats, etc.). But I don't like optional behavior based on deployment 
context (i.e. no TLS because I'm on a secure LAN, I'm a back-end server, 
etc.). That's a million times harder to get 100% right, imo.

I really believe users need to be able to depend on security and (at 
least header) compression being active. One of the lessons of HTTP/1 is 
that optional implementations of those things don't yield the right 
results for the web.
Received on Saturday, 28 January 2012 02:48:19 GMT