Re: Prague side meeting: HTTP/2 concurrency and request cancellation (CVE-2023-44487)

On Fri, Oct 13, 2023 at 3:59 AM Willy Tarreau <w@1wt.eu> wrote:

> Hi Mike,
>
> On Thu, Oct 12, 2023 at 06:33:26PM +0000, Mike Bishop wrote:
> > That might be exactly what we need. This has been a problem for many of
> > us for a while, despite it being publicly discussed only recently. While
> > your draft is a fine patch for well-behaved clients, the reality is that
> > attackers will simply play dumb and pretend to be unextended HTTP/2
> > clients.
>
> Not if you combine it with settings like I suggested earlier in the thread:
>   1) send SETTINGS with MAX_CONCURRENT_STREAMS=100
>   2) send MAX_STREAMS announcing 100 streams (e.g. max_stream_id=201)
>   3) send SETTINGS with MAX_CONCURRENT_STREAMS=10 (or even less)
>
> At this point if you get a new stream with an ID higher than the
> MAX_STREAMS you advertised and the total is above MAX_CONCURRENT_STREAMS,
> you know for certain it's an abuse.
>
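For concreteness, the check described above amounts to something like the
following Go sketch (MAX_STREAMS comes from the proposed extension, so all
of these names are hypothetical, not an existing API):

    package sketch

    // connState holds what the server would need for the abuse check;
    // none of these fields correspond to an existing API.
    type connState struct {
        advertisedMaxStreamID uint32 // from the MAX_STREAMS frame sent
        maxConcurrentStreams  uint32 // current SETTINGS value (e.g. 10)
        openStreams           uint32 // streams currently open
    }

    // isAbusive reports whether a newly received stream both exceeds
    // the advertised MAX_STREAMS bound and pushes the connection past
    // its concurrency limit.
    func isAbusive(c *connState, streamID uint32) bool {
        return streamID > c.advertisedMaxStreamID &&
            c.openStreams+1 > c.maxConcurrentStreams
    }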

Is that so?

I might be misremembering, but the initial value of
SETTINGS_MAX_CONCURRENT_STREAMS is unlimited. The client is allowed to
initiate requests before it receives the server's first SETTINGS frame; it
is designed that way so that clients can send requests in 0-RTT. The
advertised limit applies only once the client receives the first SETTINGS
frame.
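As an aside, this is also why servers send SETTINGS as their very first
frame; a minimal illustration in Go using golang.org/x/net/http2 (the
connection-preface handling around it is elided):

    package sketch

    import (
        "net"

        "golang.org/x/net/http2"
    )

    // advertiseLimit writes the server's initial SETTINGS frame. Until
    // this frame reaches the client, SETTINGS_MAX_CONCURRENT_STREAMS is
    // effectively unlimited, which is what permits 0-RTT requests.
    func advertiseLimit(conn net.Conn) error {
        fr := http2.NewFramer(conn, conn)
        return fr.WriteSettings(http2.Setting{
            ID:  http2.SettingMaxConcurrentStreams,
            Val: 100,
        })
    }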

Therefore, if we want to define an extension that puts a hard limit on the
number of concurrent requests that a client can issue, I think we should do
something like:
* state in the extension that clients implementing it MUST NOT initiate
more than 100 requests until they receive a SETTINGS frame, and
* negotiate the use of MAX_STREAMS frames using SETTINGS.

If we take this approach, there is a guarantee that the client will open
no more than 100 streams initially, and that new credits become available
only when the server takes action.
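A minimal Go sketch of the server-side credit accounting under such an
extension; everything here is hypothetical, since neither the extension
nor the MAX_STREAMS frame exists yet:

    package sketch

    // streamCredits tracks how far the client may go; MAX_STREAMS is
    // the proposed frame, not an existing API.
    type streamCredits struct {
        maxStreamID uint32 // highest client-initiated stream ID allowed
    }

    // newStreamCredits grants the initial allowance: at most 100
    // streams before the client sees SETTINGS, i.e. odd IDs 1..199.
    func newStreamCredits() *streamCredits {
        return &streamCredits{maxStreamID: 199}
    }

    // grant extends the window by n streams; new credit appears only
    // when the server decides to send a MAX_STREAMS frame carrying the
    // returned maxStreamID.
    func (c *streamCredits) grant(n uint32) uint32 {
        c.maxStreamID += 2 * n // client-initiated stream IDs are odd
        return c.maxStreamID
    }

    // allowed reports whether a new client-initiated stream fits within
    // the credit advertised so far.
    func (c *streamCredits) allowed(streamID uint32) bool {
        return streamID <= c.maxStreamID
    }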


But even with something like MAX_STREAMS, an attacker can issue requests
at a very high rate. The rate is not bound by the RTT, because the server
becomes ready to accept additional requests as soon as it sends a
MAX_STREAMS frame, rather than when the client acknowledges that frame.

Therefore, assuming that the server is configured to allow 100 concurrent
requests on the connection, an attacker can issue roughly 100 requests
every 100 microseconds, assuming it takes the server 100 microseconds to
process those 100 requests (i.e., cancel them and send a MAX_STREAMS
frame). That is on the order of one million requests per second on a
single connection.

This duration (100 microseconds) depends on the server load, so as the
load increases, the attacker's request rate drops with it, reducing the
efficiency of the attack.

However, I would argue that this would still be an effective way to put
load on the victim server, even though it would not be considered a
vulnerability of the protocol or of the server.

In other words, the protection provided by MAX_STREAMS might not be
adequate for real deployments.

We know that many existing servers throttle request concurrency separately
from connection-level concurrency. As stated previously, my preference is
to emphasise the importance of having such a throttling scheme.
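The kind of throttling I mean can be expressed as a small wrapper; a
minimal sketch in Go's net/http (the limit and names are illustrative,
not any particular server's implementation):

    package main

    import "net/http"

    // throttled caps the number of requests being processed at once,
    // independently of the per-connection stream limit. Requests that
    // are cancelled while waiting for a slot are dropped without doing
    // any backend work.
    func throttled(next http.Handler, limit int) http.Handler {
        sem := make(chan struct{}, limit)
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            select {
            case sem <- struct{}{}:
                defer func() { <-sem }()
                next.ServeHTTP(w, r)
            case <-r.Context().Done():
                // The client already cancelled (e.g. via RST_STREAM);
                // drop the request without doing any work.
            }
        })
    }

    func main() {
        // Illustrative wiring: cap the whole server at 100 in-flight
        // requests, regardless of per-connection stream limits.
        h := throttled(http.NotFoundHandler(), 100)
        _ = http.ListenAndServe(":8080", h)
    }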


> > Other mitigations will still be needed as long as that connection
> > pattern is allowed. (And reducing the default or recommended value does
> > nothing to mitigate this particular attack, so I'm not particularly
> > concerned about it. Servers will enforce their actual value from the
> > start of the connection and RESET any excess streams anyway.)
>
> Servers can (and should) use their *real* stream count and not just the
> apparent one at the protocol level. Apache, Nginx and Haproxy all have
> their own variants of this and cope well with this situation.
>

FWIW, h2o also has this kind of request-level throttling.

The only problem with h2o was that there was no way of cancelling requests
queued behind this throttling scheme. So, when using h2o as an h2-to-h1
proxy facing a Rapid Reset attack, h2o would issue many socket(2) and
connect(2) syscalls immediately followed by close(2).
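In other words, cancellation has to propagate into the throttling queue so
that a reset stream is dropped before the proxy dials upstream. A rough
sketch of that check in Go (h2o is written in C; this is illustrative, not
h2o's actual code):

    package sketch

    import (
        "context"
        "net"
    )

    // dialUpstream opens a backend (h1) connection for a proxied
    // request, but only after checking that the client has not already
    // cancelled the stream. Without this check, a Rapid Reset flood
    // turns into a flood of socket(2)/connect(2) calls immediately
    // followed by close(2).
    func dialUpstream(ctx context.Context, addr string) (net.Conn, error) {
        if err := ctx.Err(); err != nil {
            return nil, err // stream was reset while queued; skip dialing
        }
        var d net.Dialer
        // DialContext also aborts the connect if the stream is reset
        // mid-dial.
        return d.DialContext(ctx, "tcp", addr)
    }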


>
> Regards,
> Willy
>
>

-- 
Kazuho Oku
