
Re: Overlapping ranges

From: Julian Reschke <julian.reschke@gmx.de>
Date: Sat, 11 Oct 2008 12:32:27 +0200
Message-ID: <48F080BB.2090707@gmx.de>
To: V13 <v13@v13.gr>
CC: ietf-http-wg@w3.org

V13 wrote:
> Hello there,
> 
> While you are at the ranges thing, I'd like to request/suggest/ask that 
> requests with overlapping ranges be prohibited or at least deprecated.
> 
> Allowing overlapping ranges permits the client side to request more data than 
> the largest file available at the server side. It is trivial to construct a 
> 100MB file request from 200 overlapping partial requests of a 500K file. This 
> allows the TCP optimistic ACK attack [1] to be performed on web servers all 
> over the world.
> 
> I'm (we're) currently writing this up as a paper, and I'll post it here too 
> if you like, when it is finished; until then just take my word. As far as I 
> know this is the only known way that one can force the server side to 
> transmit at rates much higher than the disk I/O rate (because requesting the 
> same range takes advantage of the disk cache). When combined with persistent 
> connections, it is also the only known way to request data from the server 
> side indefinitely. This gives TCP enough time to reach its maximum 
> transmission rate and sustain it.
> 
> For the record, we were able to force a web server to continuously transmit at 
> 900Mbps over the Internet for more than 5 minutes (until interrupted) using 
> just a 100Mbytes file, overlapping ranges and a persistent HTTP connection. 
> Without overlapping ranges this wouldn't be possible.
> ...
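[The amplification V13 describes can be sketched in a few lines. This is an illustrative reconstruction, not code from the original report; the 500 KB file size and 200 overlapping ranges are taken from the figures quoted above, and the header is simply printed rather than sent to any server.]

```python
# Sketch of the overlapping-range amplification described above.
# Assumption: a 500 KB file on the server, requested as 200 fully
# overlapping byte ranges in a single Range header.
FILE_SIZE = 500 * 1024          # hypothetical file size (500 KB)
LAST_BYTE = FILE_SIZE - 1       # each range spans the whole file
N_RANGES = 200                  # number of overlapping parts

ranges = ",".join(f"0-{LAST_BYTE}" for _ in range(N_RANGES))
header = f"Range: bytes={ranges}"

# Total bytes the server would transmit if it honors every range:
total_requested = FILE_SIZE * N_RANGES   # ~100 MB from a 500 KB file
```

Because every range hits the same bytes, the server serves the response almost entirely from the page cache, which is why the transmit rate is not bounded by disk I/O.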

I agree that this is a nice DoS scenario, but wouldn't it be possible to 
do the same just with a bunch of concurrent, repeating GET requests on 
the same URI?

BR, Julian
Received on Saturday, 11 October 2008 10:33:13 GMT
