
Re: Review: http://www.ietf.org/id/draft-mbelshe-httpbis-spdy-00.txt

From: Willy Tarreau <w@1wt.eu>
Date: Wed, 29 Feb 2012 00:52:15 +0100
To: Mike Belshe <mike@belshe.com>
Cc: httpbis mailing list <ietf-http-wg@w3.org>
Message-ID: <20120228235215.GB29361@1wt.eu>

Hi Mike,

On Tue, Feb 28, 2012 at 02:52:53PM -0800, Mike Belshe wrote:
> Hi, Willy -
> Thanks for the insightful comments about header compression.  I'm out of
> the country for a few days, so I am slow to reply fully.

No problem, don't worry :-)

> We did consider this, but ultimately decided it was mostly a non issue, as
> the problem already exists.   Specifically - the same amplification attacks
> exist in the data stream with data gzip encoding.  You could make an
> argument that origin servers and proxy servers are different, I suppose;
> but many proxy servers are doing virus scanning and other content checks
> anyway, and already decoding that stream.  But if you're still not
> convinced, the problem also exists at the SSL layer.  (SSL will happily
> negotiate compression of the entire stream - headers & all - long before it
> gets to the app layer).  So overall, I don't think this is a new attack
> vector for HTTP.

I see it differently. The other examples (SSL and content encoding) are not
mandatory for a server-side component, which is exposed to DDoS attacks.
Here we're talking about making compression a mandatory requirement for
every server-side component, which means it would not even be an option to
disable it should this become a serious threat at some point.
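To make the amplification concern concrete, here is a minimal sketch (assuming Python's zlib, which implements the same DEFLATE scheme SPDY uses) of how a tiny compressed payload can force a large allocation on the receiver:

```python
import zlib

# A 10 MB run of identical bytes compresses to a few KB with zlib,
# so an attacker can send a tiny compressed header block that forces
# the receiver to expand it back to megabytes on decompression.
plain = b"a" * 10_000_000
compressed = zlib.compress(plain, 9)

# The amplification factor is the decompressed size over the wire size.
ratio = len(plain) / len(compressed)
print(f"{len(compressed)} bytes on the wire -> {len(plain)} bytes in memory "
      f"(x{ratio:.0f} amplification)")
```

A server that must decompress headers before it can even route or reject a request pays this cost up front, which is the asymmetry the DDoS argument rests on.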

> We did consider some header compression schemes like what you proposed -
> they are quite viable.  They definitely provide less compression, and are
> less resilient to learning over time like zlib is.  They also generally
> fail to help with repeated cookies, which is really where some of the
> biggest gains are.  But I'm open to other compression schemes.

I'm fairly sure there will be less gain there, but we should balance the
gain against the cost. We could just as well switch to LZMA to try to
slightly improve the compression ratio at an even higher cost. Also, I
totally understand why in SPDY you had to design *with* existing HTTP
issues in mind; thus your choices (e.g. compressing repeated cookies) make
a lot of sense. Now that we're designing a new HTTP version, we should
first address the original HTTP issues, one of them being duplicated
headers, which has been discussed here at great length. One possibility
would be to allow anyone (including the UA) to remove duplicate headers
before folding.
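As a hypothetical sketch of that idea (the helper name and header list are invented, and fields like Set-Cookie that must not be folded are deliberately left out of scope):

```python
def fold_headers(headers):
    """Merge repeated header names into one comma-separated field
    (HTTP list-header folding), dropping exact duplicate values first
    so redundant copies never reach the wire.

    `headers` is a list of (name, value) pairs; names are compared
    case-insensitively. Caveat: not valid for Set-Cookie, whose values
    cannot be comma-joined.
    """
    merged = {}  # lowercase name -> unique values in arrival order
    for name, value in headers:
        values = merged.setdefault(name.lower(), [])
        if value not in values:  # drop the duplicate before folding
            values.append(value)
    return [(name, ", ".join(values)) for name, values in merged.items()]

print(fold_headers([
    ("Accept", "text/html"),
    ("Accept", "text/html"),        # exact duplicate: removed
    ("Accept-Encoding", "gzip"),
    ("accept", "application/xml"),  # same field, new value: kept
]))
# -> [('accept', 'text/html, application/xml'), ('accept-encoding', 'gzip')]
```

If the UA were allowed to do this before sending, the redundancy zlib currently absorbs would simply never be emitted.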

> Note that SPDY's compressor achieves about 85-90% compression for typical
> web pages.

I'm not surprised, and that's quite good. I'm not contesting the gains,
but rather the cost of reaching them; I think slightly lower gains could
be achieved at a much lower cost, provided we can rework HTTP now.
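For reference, the figure Mike cites comes largely from zlib's streaming state: later header blocks are encoded as back-references into earlier ones. A sketch of that effect, assuming Python's zlib and an invented header block:

```python
import zlib

def make_headers(path):
    """An invented, near-constant request-header block; only the path varies."""
    return (f"GET {path} HTTP/1.1\r\n"
            "Host: www.example.com\r\n"
            "Cookie: sessionid=abc123; tracking=xyz789\r\n\r\n").encode()

# One streaming compressor shared across "requests", as SPDY does per
# connection. Z_SYNC_FLUSH emits each block without resetting the state.
comp = zlib.compressobj(9)
first = comp.compress(make_headers("/index.html")) + comp.flush(zlib.Z_SYNC_FLUSH)
second = comp.compress(make_headers("/style.css")) + comp.flush(zlib.Z_SYNC_FLUSH)

# The second block compresses far better: it is mostly back-references
# to the first, including the repeated Cookie line.
print(len(first), len(second))
```

A stateless per-message scheme gives up exactly this cross-request gain, which is the gain/cost trade-off being debated here.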

Best regards,
Willy
Received on Tuesday, 28 February 2012 23:52:41 UTC
