Re: Review: http://www.ietf.org/id/draft-mbelshe-httpbis-spdy-00.txt

On Tue, Feb 28, 2012 at 10:49 PM, Willy Tarreau <w@1wt.eu> wrote:

> Hi Amos,
>
> On Wed, Feb 29, 2012 at 07:18:54PM +1300, Amos Jeffries wrote:
> (...)
> >  And I am not considering the RAM DoS issues Willy brought up.
>
> Please note that I was not talking about a RAM DoS but a CPU DoS, since
> by sending just a few bytes you can force the other end to process many
> more bytes. Even if you decide that you refuse to decompress past 8 kB
> of headers, you still have to decompress them to discover that it is too
> large. Also, nothing prevents the attacker from sending extremely large
> valid requests in just a few bytes which you have to decompress and parse
> before deciding that you finally don't want to process them, or worse,
> recompress then forward.
>
> This is my real concern. I'm regularly helping people to deal with DDoSes
> and I can assure you that the first thing you try to do is spot a header
> value which helps discriminate between real and undesired traffic then
> start to act on this. It is worth noting that not all of the DDoS traffic
> is easily identified at the boundary since many requests are totally valid
> and must be forwarded to the origin servers. When you receive something
> like 200000 requests per second, it's not uncommon to have to forward
> about 20000 per second to the server after filtering 90%. Right now with
> HTTP/1.1, a properly tuned $1000 PC is totally capable of this. With
> compression, I already can't even imagine compressing 20000 requests per
> second, let alone decompress and parse 200000. In fact it would even be
> much more than 200000 since the bandwidth would allow something like 2
> million thanks to the higher compression ratio. But these 2 million
> would have to be fully decompressed before being parsed and then dropped.
> For instance, something I've been facing several times is dropping a
> few repetitive URIs. Bots request non-existent objects at a very high
> rate. You have little processing to perform: just match the URI and drop
> the connection. That's quite cheap, just as it is to drop an HTTP/1.1
> request without a Host header.
>
> (...)
> > Let's have the bandwidth-savings cake, but let's not sour it by adding
> > time and power costs.
>
> That's exactly my point too. And again, I'm not contesting the huge
> savings that Patrick and Mike report, I'm even impressed.
>

Hey Willy -

Overall, I'm not sure how much to worry about DoS.  In the end, I think it's
something you figure out how to mitigate in implementations - just like
we've done it before at every level, from DNS to TCP to SSL to HTTP.

I don't think it's as dire as you've made it out to be.  It's not like this
amplification attack is difficult to mitigate.  A server wanting to limit
it would probably do the following:

   a) First off, because SPDY tries to limit connections to one-per-domain,
there is a new element of information at the transport layer which can be
used for DoS countermeasures.  This needs some experimentation, and I bet
it will be useful.

   b) Also note that for SPDY over SSL, you've got DoS countermeasures from
the SSL layer at your disposal.  These are certainly going to slow down
many attackers.  (It will also make some sites nervous - especially those
that haven't hardened port 443 yet!)

   c) If attackers get through both of those layers, the amplification
attack is easy to detect, and detecting it does not cost massive memory,
bandwidth, or CPU on the server; an implementation looking to conservatively
block DoS could cap decompressed expansion at 2-4 KB and probably thwart
almost everything right there.  Cheap.
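To sketch point (c): the check is cheap because zlib lets a receiver bound
the amount of output it will produce before giving up.  The following is an
illustrative sketch only - plain Python zlib with a made-up oversized header
block, not SPDY's actual framing or its preset compression dictionary - but
it shows both the amplification and the cap:

```python
import zlib

# Hypothetical oversized header block: a few hundred compressed bytes
# can inflate into ~100 KB of headers on the receiving side.
headers = b"x-padding: " + b"A" * 100_000 + b"\r\n"
compressed = zlib.compress(headers)
print(len(compressed), "compressed bytes ->", len(headers), "decompressed")

LIMIT = 4 * 1024  # e.g. the 2-4 KB ceiling suggested above

def decompress_capped(data: bytes, limit: int = LIMIT) -> bytes:
    """Decompress at most `limit` output bytes; reject anything bigger."""
    d = zlib.decompressobj()
    out = d.decompress(data, limit)  # zlib stops after `limit` output bytes
    # Leftover input or an unfinished stream means the block exceeds the cap.
    if d.unconsumed_tail or not d.eof:
        raise ValueError("header block exceeds %d bytes; dropping" % limit)
    return out

try:
    decompress_capped(compressed)
except ValueError as e:
    print("rejected:", e)
```

The key point is that the receiver never inflates more than `LIMIT` bytes,
so the attacker's amplification factor is bounded no matter what they send.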

Overall, I just don't see how DoS is a silver bullet against the zlib
compressor.  Almost any new feature has DoS potential (e.g. we haven't even
talked about frame flooding or syn_streams yet :-)

If the bar for acceptance is "no new DoS potential", then I'm not sure
there is room to change very much in HTTP/2.0 ever.

Mike

>
> Thanks,
> Willy
>
>
>

Received on Wednesday, 29 February 2012 23:01:26 UTC