
Re: Review: http://www.ietf.org/id/draft-mbelshe-httpbis-spdy-00.txt

From: Willy Tarreau <w@1wt.eu>
Date: Wed, 29 Feb 2012 07:49:29 +0100
To: Amos Jeffries <squid3@treenet.co.nz>
Cc: ietf-http-wg@w3.org
Message-ID: <20120229064929.GB32187@1wt.eu>
Hi Amos,

On Wed, Feb 29, 2012 at 07:18:54PM +1300, Amos Jeffries wrote:
(...)
>  And I am not considering the RAM DoS issues Willy brought up.

Please note that I was not talking about a RAM DoS but a CPU DoS, since
by sending just a few bytes you can force the other end to process many
more bytes. Even if you decide to refuse to decompress past 8 kB of
headers, you still have to decompress them to discover that they are too
large. Also, nothing prevents the attacker from sending extremely large
valid requests in just a few bytes, which you have to decompress and parse
before deciding that you don't want to process them after all, or worse,
recompress and forward.
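The amplification can be sketched with zlib (the compressor SPDY uses for headers). This is a hypothetical illustration, not taken from the draft: a repetitive header block compresses to a tiny payload, and the receiver cannot learn the decompressed size without actually running the decompressor, even just to enforce a size limit.

```python
import zlib

# Hypothetical example: a large but syntactically valid header line made of
# repetitive bytes compresses down to a few dozen bytes on the wire.
headers = b"X-Padding: " + b"a" * 8192 + b"\r\n"
payload = zlib.compress(headers, 9)
print(len(payload), "bytes on the wire ->", len(headers), "bytes to process")

# Even with an 8 kB header limit, the receiver must spend CPU decompressing
# before it can decide the block is too large.  max_length caps the output
# per call, but the work up to that cap is already done.
d = zlib.decompressobj()
out = d.decompress(payload, 8192)
assert len(out) <= 8192  # limit enforced, CPU already spent
```

The attacker pays bandwidth for `len(payload)` bytes; the victim pays CPU for `len(headers)` bytes, which is the asymmetry described above.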

This is my real concern. I regularly help people deal with DDoSes,
and I can assure you that the first thing you try to do is spot a header
value which helps discriminate between real and undesired traffic, and
then act on it. It is worth noting that not all of the DDoS traffic
is easily identified at the boundary since many requests are totally valid
and must be forwarded to the origin servers. When you receive something
like 200000 requests per second, it's not uncommon to have to forward
about 20000 per second to the server after filtering 90%. Right now with
HTTP/1.1, a properly tuned $1000 PC is totally capable of this. With
compression, I already can't even imagine compressing 20000 requests per
second, let alone decompress and parse 200000. In fact it would even be
much more than 200000 since the bandwidth would allow something like 2
million thanks to the higher compression ratio. But those 2 million
would have to be fully decompressed before being parsed and then dropped.
For instance, something I've faced several times is dropping a few
repetitive URIs. Bots request non-existent objects at a very high
rate. There is little processing to perform: just match the URI and drop
the connection. That's quite cheap, just as it is to drop an HTTP/1.1
request without a Host header.
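A minimal sketch of why this filtering is cheap on plaintext HTTP/1.1: the front-end can reject known attack URIs, or requests missing a Host header, by scanning the raw request bytes, with no parsing framework and no decompression. The blocked URIs and the simplified Host check here are hypothetical illustrations, not from the message above.

```python
# Hypothetical bot-target URIs seen during an attack.
BLOCKED_URIS = {b"/nonexistent.php", b"/w00t.cgi"}

def should_drop(raw_request: bytes) -> bool:
    """Cheap byte-level filter on a plaintext HTTP/1.1 request head.

    Simplified sketch: assumes CRLF line endings and a canonical
    "Host:" header spelling; a real filter would be more tolerant.
    """
    request_line, _, rest = raw_request.partition(b"\r\n")
    parts = request_line.split(b" ")
    if len(parts) != 3:
        return True  # malformed request line
    uri, version = parts[1], parts[2]
    if uri in BLOCKED_URIS:
        return True  # match the URI, drop the connection
    # HTTP/1.1 requests without a Host header are equally cheap to reject.
    if version == b"HTTP/1.1" and b"\r\nHost:" not in b"\r\n" + rest:
        return True
    return False
```

With compressed headers, the same decision would require decompressing the whole header block first, which is exactly the CPU cost at issue.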

(...)
> Lets have the bandwidth savings cake, but lets not sour it by adding 
> time and power costs.

That's exactly my point too. And again, I'm not contesting the huge
savings that Patrick and Mike report, I'm even impressed.

Thanks,
Willy
Received on Wednesday, 29 February 2012 06:50:13 GMT