W3C home > Mailing lists > Public > ietf-http-wg@w3.org > January to March 2012

Re: Review: http://www.ietf.org/id/draft-mbelshe-httpbis-spdy-00.txt

From: Willy Tarreau <w@1wt.eu>
Date: Thu, 1 Mar 2012 08:30:42 +0100
To: Mike Belshe <mike@belshe.com>
Cc: Amos Jeffries <squid3@treenet.co.nz>, ietf-http-wg@w3.org
Message-ID: <20120301073042.GE7791@1wt.eu>
Hi Mike,

On Wed, Feb 29, 2012 at 03:00:58PM -0800, Mike Belshe wrote:
> Overall, I'm not sure how much to worry about DoS.  In the end, I think its
> something you figure out how to mitigate in implementations - just like
> we've done it before at every level from DNS to TCP to SSL to HTTP.
> I don't think its as dire as you've made it out to be.  It's not like this
> amplification attack is difficult to mitigate.  A server wanting to limit
> it, would probably do the following:
>    a) First off, because SPDY tries to limit connections to one-per-domain,
> there is a new element of information at the transport layer which can be
> used for DoS countermeasures.  This needs some experimentation, and I bet
> it will be useful.
>    b) Also note that for SPDY over SSL, you've got DoS countermeasures from
> the SSL layer at your disposal.  These are certainly going to slow down
> many attackers.  (It will also make some sites nervous - especially those
> that haven't hardened port 443 yet!)
>    c) If they get through both of those layers, the amplification attack is
> easy to detect, and not a consumption of massive memory, bandwidth, or CPU
> on the server; an implementation looking to conservatively block DoS could
> check for expansions of 2-4KB and probably thwart most everything right
> there.  Cheap.

I disagree on exactly this point, because in order for the server to know
that it doesn't want to process this 4 kB request, it first has to decompress
it. Using plain old HTTP, in order for the server to reject a 4 kB request,
the attacker had to upload it, which limited his ability to knock the
server down. Now with just a few bytes an attacker can build a 4 kB request
that the server will have to parse before deciding to reject it. A 4 kB
request would fit in 6 bytes from what I've seen, maybe less. This means
that by filling the pipe with 6-byte requests, a client can make a server
parse 800 times that amount of traffic. Filling a pipe at 1 Mbps means the
server has to process 800 Mbps of requests it will decide to drop in the
end. This is my precise concern. And more importantly, I doubt that a 4 kB
limit will last long. I used a 4 kB request size limit in haproxy 10 years
ago and quickly had to push it to 8 kB to satisfy most sites. In enterprise
environments it's worse; people sometimes configure it up to 32 kB, but
that's another story.
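To make the mechanism concrete, here is a small Python sketch (the header
names and values are made up, and it omits SPDY's preset dictionary, so the
exact byte counts differ from the 6-byte figure above; it only illustrates
the asymmetry): once a 4 kB header block has passed through the persistent
zlib context, resending the identical block costs only a handful of bytes
on the wire, yet the server must still inflate the full 4 kB before it can
parse and reject the request.

```python
import zlib

# A hypothetical 4 kB header block (illustrative names/values only).
block = (b"get / HTTP/1.1\r\n"
         b"host: example.com\r\n"
         b"cookie: " + b"a" * 4000 + b"\r\n\r\n")

# SPDY keeps one compression context per connection, so state carries
# over between requests; Z_SYNC_FLUSH delimits each header frame.
comp = zlib.compressobj()
first = comp.compress(block) + comp.flush(zlib.Z_SYNC_FLUSH)
repeat = comp.compress(block) + comp.flush(zlib.Z_SYNC_FLUSH)

# The receiver has no choice but to decompress before it can parse
# anything, so the full 4 kB is materialized per request either way.
decomp = zlib.decompressobj()
assert decomp.decompress(first) == block
assert decomp.decompress(repeat) == block

print("block: %d bytes, first send: %d bytes, repeat send: %d bytes"
      % (len(block), len(first), len(repeat)))
```

The repeated block back-references the copy already in the deflate window,
so each subsequent request shrinks to a few dozen bytes while the server's
decompression work stays constant at the full block size.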

> Overall, I just don't see how DoS is a silver-bullet against the zlib
> compressor.

Note that I'm not trying to find a silver bullet against zlib; I'm worried
about zlib making DoS much easier. I have great respect for all the work
you have done in optimizing the use of zlib for header compression and I
don't want to dismiss any of your findings. I just think that one point has
been overlooked, and that point, in my opinion, is a real problem.

> Almost any new feature has DoS potential (e.g. we haven't even
> talked about frame flooding or syn_streams yet :-)

Every design choice that makes DoS easier needs to be addressed, even if
it slightly reduces optimizations. DoS is far too common these days; we
should focus on protecting the internet *before* making it faster for
mobile users.

Received on Thursday, 1 March 2012 07:31:20 UTC