Re: HTTP/2 DoS Vulnerability (Was: HTTP/2 response completed before its request)

It would have been good to have had you sit in while the group involved in
creating HTTP2 discussed DoS considerations, which have been brought up
consistently over the course of the development of the protocol.

I'll attempt to rehash a small amount here.

T/TCP is not a good analog for HTTP2, and it had a completely different
attack surface.

To date, the browser (or "internet") use case is likely to use TLS, since it
actually works.
If we see widespread deployment of HTTP2 in the clear, it will end up being
no worse than HTTP1 in terms of DoS potential, which is acceptable (if not
optimal) today.

In any case, your example is only useful against servers or clients which are
not deploying over TLS, and the "replay" attack vector you've chosen to bring
up is not interesting in such cases, since cleartext HTTP1 is already just as
replayable today.

Let's assume, then, that we're worried about DoS attacks where the attacker
is using TLS.
In such cases, replay attacks are significantly more difficult.
That leaves non-replay attacks.

Regardless of whether one deploys with or without TLS, when under DoS attack
one sets the various SETTINGS to conservative values and thus, as a function
of load/resource pressure, dynamically reduces the amount of memory, the
number of parallel streams, and perhaps the maximum header size one is
willing to accept.
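
To make that concrete, here is a rough sketch (mine, purely illustrative) of
serializing a "conservative" SETTINGS frame: a 9-byte frame header followed
by 16-bit-identifier / 32-bit-value pairs. The setting identifiers and the
specific limit values below are assumptions for illustration -- check them
against whichever draft you implement.

    # Illustrative sketch: a conservative SETTINGS frame on the wire.
    # Identifiers and limit values are assumptions, not recommendations.
    import struct

    SETTINGS_HEADER_TABLE_SIZE      = 0x1
    SETTINGS_MAX_CONCURRENT_STREAMS = 0x3
    SETTINGS_INITIAL_WINDOW_SIZE    = 0x4
    SETTINGS_MAX_HEADER_LIST_SIZE   = 0x6

    def settings_frame(settings):
        """Encode a SETTINGS frame: 9-byte header + 6 bytes per setting."""
        payload = b"".join(struct.pack("!HI", ident, value)
                           for ident, value in settings)
        length, frame_type, flags, stream_id = len(payload), 0x4, 0x0, 0x0
        header = struct.pack("!BHBBI",
                             (length >> 16) & 0xFF, length & 0xFFFF,
                             frame_type, flags, stream_id)
        return header + payload

    # Under load pressure, advertise tighter limits in the next SETTINGS.
    conservative = settings_frame([
        (SETTINGS_HEADER_TABLE_SIZE,      0),          # no HPACK dynamic table
        (SETTINGS_MAX_CONCURRENT_STREAMS, 4),          # few parallel streams
        (SETTINGS_INITIAL_WINDOW_SIZE,    16 * 1024),  # small stream windows
        (SETTINGS_MAX_HEADER_LIST_SIZE,   8 * 1024),   # cap header block size
    ])

A server sends this on the connection (stream 0) and can tighten or relax it
again later as load changes.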

With the document as it is now, one might be forced to /dev/null bytes for a
round trip to ensure that the more conservative values are in effect.
We'll see whether this is good enough or not-- the fix is fairly simple:
either persist these values across connections, or have a separate profile
(e.g. h2-c) which the server offers in preference to, or in lieu of, h2 and
which would have conservative settings by default.
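
As a sketch of the second option (mine; "h2-c" here is just the hypothetical
token from the paragraph above, not anything registered), a server under
pressure could simply stop offering plain h2 during ALPN negotiation, so that
every new connection lands on the conservative profile. Roughly, with
Python's ssl module:

    # Sketch only: prefer a hypothetical conservative profile token over "h2".
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")   # placeholder paths

    def alpn_preference(under_attack: bool):
        # Under DoS pressure, offer only the conservative profile so every
        # negotiated connection starts from conservative defaults.
        return ["h2-c"] if under_attack else ["h2-c", "h2", "http/1.1"]

    ctx.set_alpn_protocols(alpn_preference(under_attack=False))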

If you're worried about replay of cookies, etc, then yes, there is little
we can do here while continuing to use HTTP semantics as they exist today.
In such cases one should really be using TLS.

The question that should be asked is:
How is HTTP2 worse than HTTP1 in terms of DoS?
With HTTP2, servers can:
  - specify a truly tiny max-state size, and
  - reduce the number of connections they accept.

Given the ability in HTTP2 to set the amount of memory one is willing to
consume for any connection, and that the minimum state per connection can
be counted in a small number of ints, I think your concern doesn't have a
lot of merit.
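
As a back-of-the-envelope illustration (my sketch, nothing normative): the
bookkeeping a receiver needs per connection, once it has advertised tiny
SETTINGS, really is a handful of integers plus a small per-stream map.

    # Rough illustration of minimal per-connection state under tiny SETTINGS.
    from dataclasses import dataclass, field

    @dataclass
    class ConnState:
        recv_window: int = 16 * 1024      # connection-level flow-control window
        max_concurrent_streams: int = 4   # what we advertised in SETTINGS
        open_streams: int = 0             # current count against that limit
        last_stream_id: int = 0           # highest stream ID seen (for GOAWAY)
        header_table_size: int = 0        # HPACK dynamic table allowed (0 = none)
        stream_windows: dict = field(default_factory=dict)  # per-stream windows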


-=R


On Tue, Jul 1, 2014 at 3:35 PM, Poul-Henning Kamp <phk@phk.freebsd.dk>
wrote:

> In message <CAP+FsNcL=pJXhi2nZZgVBY2M3gPaMGiVFBqRWmAS_=
> 2Qd4KLRQ@mail.gmail.com>, Roberto Peon writes:
>
> >Howso?
>
> (Changed subject)
>
> In a sense this is the evil twin of one of the design-goals:  We
> want the white-hat clients to be able to send a load of requests
> at the server as fast as possible, to "improve user experience".
>
> Problem is, black-hat clients get to abuse that welcome-mat too.
>
> The fundamental problem is that HTTP/2 fails to extract any
> proof-of-good-faith from the client; it will be perfectly possible
> to do things like
>
>         nc www.example.com 80 > /dev/null < huge_file
>
> There is not a single place in the HTTP/2 protocol where the client
> is actually forced to pay attention:  There are no randomly chosen
> ID#'s to return to sender or unpredictable initial windows --
> everything is perfectly predictable or at least replayable.
>
> The claim that "HTTP2 streams are like TCP" is rubbish:  None of
> the many expensive lessons of TCP -- or for that matter NCP -- have
> been incorporated.
>
> In fact, what HTTP/2 is proposing to do is exactly what sunk T/TCP
> as a HTTP accelerator a decade ago:  Gaining perceived performance
> by eliminating the "proof-of-willingness-to-work-for-it" requirement
> from clients.
>
> Admittedly it does so at a higher level than with T/TCP so at
> least the attacker has to have a legitimate IP number this time,
> but still...
>
> Even if the client totally "fails" to read from the TCP socket (a
> well known strategy in spam email) the server is still tricked into
> wasting significant amounts of work before it can detect that it is
> under attack from black hats.  (Servers attempting to mitigate this
> with non-blocking writes may only make matters worse, since they
> will expend even more CPU & syscalls and still not get anywhere.)
>
> I do not want to give the impression that this is a trivial problem
> to solve:  There is no material difference between a million-machine
> botnet fetching your home-page and a #neilwebfail.
>
> But HTTP/2 doesn't do *anything* to mitigate even the kind of
> "tape-recorder replay" attacks we have known about for more than
> 25 years, and which TCP was specifically designed to resist.
>
> I don't know exactly how many machines it will take to DoS any given
> HTTP2 implementation, but we're probably not even talking 500, maybe
> as low as 100 will do.  (We should try to determine this number
> before somebody gets a DefCon slot just to tell us how little it
> takes.)
>
> It's too late to address DoS attacks comprehensively in HTTP/2, no
> matter what, but we can and should remove some of the most obvious
> bulls-eyes. (CONTINUATION, no flow-control on HEADERS, add
> max HEADER size and initial WORK-QUOTA in SETTINGS etc.)
>
> For HTTP/3 we should look seriously at how servers can extract
> situation-tuned proof-of-work "payments" from clients as a condition
> for expending service.
>
> Under normal circumstances the server just hands out trivial tasks
> (2234242342 + 124123123 = ?), but when a DoS is suspected, the
> clients are asked to solve gradually more time consuming tasks (find
> an MD5 input having hash 0x3495823543????????...), in order to limit
> the number of requests per client dynamically.
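
[A minimal sketch of the kind of puzzle described above, in the standard
leading-zero-bits variant rather than a fixed target prefix, keeping MD5 as
in the example; the nonce, difficulty parameter, and function names are my
own illustrative assumptions, not proposal text:]

    # Sketch of a server-issued hash-prefix puzzle (illustrative parameters).
    import hashlib
    import os

    def make_challenge(difficulty_bits: int):
        """Server: hand out a random nonce plus a difficulty scaled to load."""
        return os.urandom(8), difficulty_bits

    def solve(nonce: bytes, difficulty_bits: int) -> int:
        """Client: find a counter whose MD5(nonce || counter) starts with
        difficulty_bits zero bits; expected work doubles per extra bit."""
        counter = 0
        while True:
            digest = hashlib.md5(nonce + counter.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") >> (128 - difficulty_bits) == 0:
                return counter
            counter += 1

    def verify(nonce: bytes, difficulty_bits: int, counter: int) -> bool:
        """Server: a single hash to check, however hard it was to solve."""
        digest = hashlib.md5(nonce + counter.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") >> (128 - difficulty_bits) == 0

[Verification costs the server one hash, while the expected client cost
doubles with each additional bit of difficulty, which is what would let a
server scale the "payment" with load.]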
>
> It makes a really big difference if a black-hat client can spew
> requests at you at wire-speed or only at a server-chosen rate
> scaled to your CPU speed, at the server's discretion.
>
> The demand that a client provide proof of one second of calculations
> per request will inconvenience a large class of the trivial DoS
> attacks HTTP/2 is wide open for, while still being only a minor
> annoyance for legitimate users, who'd rather receive slow service
> than no service at all.
>
> All this of course presumes that the WG actually cares about
> DoS attacks.  I get the feeling I'm the only one who does ?
>
> Poul-Henning
>
> --
> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
> phk@FreeBSD.ORG         | TCP/IP since RFC 956
> FreeBSD committer       | BSD since 4.3-tahoe
> Never attribute to malice what can adequately be explained by incompetence.
>

Received on Tuesday, 1 July 2014 23:33:57 UTC