HTTP/2 DoS Vulnerability (Was: HTTP/2 response completed before its request)

In message <CAP+FsNcL=pJXhi2nZZgVBY2M3gPaMGiVFBqRWmAS_=2Qd4KLRQ@mail.gmail.com>, Roberto Peon writes:

>Howso?

(Changed subject)

In a sense this is the evil twin of one of the design-goals:  We
want the white-hat clients to be able to send a load of requests
at the server as fast as possible, to "improve user experience".

Problem is, black-hat clients get to abuse that welcome-mat too.

The fundamental problem is that HTTP/2 fails to extract any
proof-of-good-faith from the client; it will be perfectly possible
to do things like

	nc www.example.com 80 < huge_file > /dev/null

There is not a single place in the HTTP/2 protocol where the client
is actually forced to pay attention:  There are no randomly chosen
ID#'s to return to sender or unpredictable initial windows --
everything is perfectly predictable or at least replayable.

The claim that "HTTP2 streams are like TCP" is rubbish:  None of
the many expensive lessons of TCP -- or for that matter NCP -- have
been incorporated.

In fact, what HTTP/2 is proposing to do is exactly what sank T/TCP
as an HTTP accelerator a decade ago:  Gaining perceived performance
by eliminating the "proof-of-willingness-to-work-for-it" requirement
from clients.

Admittedly it does so at a higher level than with T/TCP so at
least the attacker has to have a legitimate IP number this time,
but still...

Even if the client totally "fails" to read from the TCP socket (a
well-known strategy in spam email) the server is still tricked into
wasting significant amounts of work before it can detect that it
is under attack from black hats.  (Servers attempting to mitigate
this with non-blocking writes may only make matters worse, since
they will expend even more CPU & syscalls and still not get
anywhere.)
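
To make that failure mode concrete, here is a minimal sketch of
such a "never read" client.  The host and path are hypothetical,
and this illustrates the attack class, nothing more:

	/* slowread.c -- connect, ask for something big, never read.
	 * Hypothetical host/path, for illustration only. */
	#include <netdb.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/socket.h>

	int
	main(void)
	{
		struct addrinfo hints, *res;
		const char req[] =
		    "GET /huge HTTP/1.1\r\nHost: www.example.com\r\n\r\n";
		int s;

		memset(&hints, 0, sizeof hints);
		hints.ai_socktype = SOCK_STREAM;
		if (getaddrinfo("www.example.com", "80", &hints, &res) != 0)
			return (1);
		s = socket(res->ai_family, res->ai_socktype,
		    res->ai_protocol);
		if (s < 0 || connect(s, res->ai_addr, res->ai_addrlen) < 0)
			return (1);
		(void)write(s, req, sizeof req - 1);
		/* Never read: the response fills both socket buffers,
		 * then the server sits on the connection state, its
		 * writes blocked or its event loop spinning. */
		pause();
		return (0);
	}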

I do not want to give the impression that this is a trivial problem
to solve:  There is no material difference between a million-machine
botnet fetching your home-page and a #neilwebfail.

But HTTP/2 doesn't do *anything* to mitigate even the kind of
"tape-recorder replay" attacks we have known about for more than
25 years, and which TCP was specifically designed to resist.

I don't know exactly how many machines it will take to DoS any given
HTTP/2 implementation, but we're probably not even talking 500;
as few as 100 may do.  (We should try to determine this number
before somebody gets a DefCon slot just to tell us how little it
takes.)

It's too late to address DoS attacks comprehensively in HTTP/2, no
matter what, but we can and should remove some of the most obvious
bull's-eyes.  (CONTINUATION frames, the lack of flow control on
HEADERS; add a maximum HEADER size and an initial WORK-QUOTA to
SETTINGS, etc.)
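
To make the suggestion concrete, a sketch of what the two SETTINGS
additions might look like (names and codepoints are invented for
illustration):

	/* Hypothetical SETTINGS identifiers -- codepoints invented
	 * for illustration. */
	enum {
		/* Largest header block (in octets) the server is
		 * willing to buffer per stream. */
		SETTINGS_MAX_HEADER_SIZE    = 0x10,
		/* Number of requests a new connection may issue
		 * before it must earn more quota, analogous to the
		 * initial flow-control window for DATA. */
		SETTINGS_INITIAL_WORK_QUOTA = 0x11,
	};

A server could then refuse, cheaply and early, anything exceeding
what it has announced.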

For HTTP/3 we should look seriously at how servers can extract
situation-tuned proof-of-work "payments" from clients as a
condition of service.

Under normal circumstances the server just hands out trivial tasks
(2234242342 + 124123123 = ?), but when a DoS is suspected, the
clients are asked to solve gradually more time-consuming tasks (find
an MD5 input having hash 0x3495823543????????...), in order to limit
the number of requests per client dynamically.
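
A sketch of the client's side of such a puzzle (the nonce, the
demanded prefix, and the framing are all invented for illustration;
compile against OpenSSL's MD5 with -lcrypto):

	/* powsolve.c -- brute-force a server-issued puzzle: find a
	 * counter whose MD5, over a server nonce plus the counter,
	 * starts with the demanded prefix.  Illustration only. */
	#include <openssl/md5.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	int
	main(void)
	{
		const unsigned char nonce[] = "server-chosen-nonce";
		const unsigned char want[] = { 0x34, 0x95 };
		unsigned char buf[sizeof nonce + sizeof (uint64_t)];
		unsigned char digest[MD5_DIGEST_LENGTH];
		uint64_t ctr;

		memcpy(buf, nonce, sizeof nonce);
		for (ctr = 0;; ctr++) {
			memcpy(buf + sizeof nonce, &ctr, sizeof ctr);
			MD5(buf, sizeof buf, digest);
			if (memcmp(digest, want, sizeof want) == 0)
				break;
		}
		printf("proof: %llu\n", (unsigned long long)ctr);
		return (0);
	}

The server verifies the returned counter with a single MD5 call;
each additional prefix byte it demands multiplies the client's
expected work by 256 (a two-byte prefix costs about 2^16 hashes)
while its own verification cost stays constant.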

It makes a really big difference whether a black-hat client can
spew requests at you at wire-speed, or only at a rate the server
chooses, scaled to the client's CPU speed.

Demanding that a client provide proof of one second of calculation
per request will blunt a large class of the trivial DoS attacks
HTTP/2 is wide open to, while remaining only a minor annoyance
for legitimate users, who'd rather receive slow service than no
service at all.

All this of course presumes that the WG actually cares about
DoS attacks.  I get the feeling I'm the only one who does?

Poul-Henning

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.

Received on Tuesday, 1 July 2014 22:35:54 UTC