Hello!

On Wed, Oct 11, 2023 at 11:42:54AM +0200, Willy Tarreau wrote:

> On Wed, Oct 11, 2023 at 10:45:13AM +1100, Mark Nottingham wrote:

[...]

> > Other discussions might touch on whether there are other measures (protocol
> > mechanisms or otherwise) that clients and servers can take, how concurrency
> > is exposed to calling code, and whether we can give better guidance about how
> > concurrency violations should be handled.
> 
> A few of these were already discussed in the thread below, opened by
> Cory 4 years ago, which covered how to count streams against the
> limit, and where I even mentioned this exact method of attack:
> sending HEADERS followed by RST_STREAM, which does not change the
> total stream count from the protocol's perspective:
> 
>   https://lists.w3.org/Archives/Public/ietf-http-wg/2019JanMar/0131.html
> 
> We already faced that issue long ago when multiple haproxy instances
> were stacked on top of each other over H2: too-short timeouts on the
> front layer would cause long series of HEADERS+RST_STREAM on the
> back layer before the request had a chance to be processed, because
> the second layer was configured with a nice value that made it
> slower than the first one. What we've been doing in haproxy against
> this is that, instead of counting streams at the protocol level, we
> count attached ones at the application layer: these are created at
> the same moment, but they're only released once the application
> layer is aware of the close. And we stop processing new streams once
> the configured limit is passed. This means that for a limit of 100
> streams, for example, if we receive 100 HEADERS and their 100
> respective RST_STREAM frames, it will indeed create 100 streams that
> are immediately orphaned (and closed from an H2 perspective), but
> the 101st HEADERS frame will interrupt processing until some of
> these streams are effectively closed and freed (and not just at the
> protocol layer).
> 
> And from what I've read below from Maxim Dounin, it seems like nginx
> applies a very similar strategy (they use a 2x margin instead of +1,
> but the principle is the same: let streams finish first):
> 
>   https://mailman.nginx.org/pipermail/nginx-devel/2023-October/S36Q5HBXR7CAIMPLLPRSSSYR4PCMWILK.html
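
As a rough illustration of the accounting Willy describes above, here
is a minimal C sketch; the names are hypothetical, not haproxy's
actual code.  The key point is that the counter tracks streams still
attached to the application layer, so a HEADERS+RST_STREAM pair keeps
counting until the application releases the stream:

    /* Sketch only: hypothetical names, not haproxy's actual code. */
    struct h2c {
        unsigned int attached;  /* streams held by the app layer */
        unsigned int limit;     /* configured limit, e.g. 100 */
    };

    /* Called for each HEADERS frame opening a new stream.  Returns
     * nonzero to pause frame processing until streams are freed. */
    static int h2c_may_accept_stream(struct h2c *c)
    {
        if (c->attached >= c->limit)
            return 1;       /* e.g. the 101st HEADERS waits here */
        c->attached++;      /* still counted if RST_STREAM follows */
        return 0;
    }

    /* Called when the application layer releases the stream; only
     * now does it stop counting against the limit. */
    static void h2c_detach_stream(struct h2c *c)
    {
        c->attached--;
        /* resume frame processing if it was paused on the limit */
    }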

To clarify:

In nginx, the concurrent streams limit
(http2_max_concurrent_streams, 128 by default) is advertised in
the SETTINGS_MAX_CONCURRENT_STREAMS setting, but instead of
counting open and half-closed streams as per the HTTP/2
specification, nginx counts requests active at the application
level.  When RST_STREAM is received, nginx tries to close the
request, but this might not happen immediately (for example, if
nginx is proxying the request and is configured to ignore client
aborts, http://nginx.org/r/proxy_ignore_client_abort).  Any
excess streams are rejected with the REFUSED_STREAM stream error.
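
As a minimal sketch of this accounting (hypothetical names, not
nginx's actual internals): the counter tracks requests alive at the
application level, an incoming RST_STREAM merely asks to close the
request, and excess streams are refused:

    #define H2_REFUSED_STREAM  0x7u     /* RFC 9113 error code */

    struct h2_conn {
        unsigned int active;            /* app-level active requests */
        unsigned int max_concurrent;    /* advertised limit, 128 */
    };

    /* Stub: queue an RST_STREAM frame with the given error code. */
    static void send_rst_stream(struct h2_conn *c, unsigned int err)
    {
        (void) c; (void) err;
    }

    /* Called for each HEADERS frame that opens a new stream. */
    static int h2_on_headers(struct h2_conn *c)
    {
        if (c->active >= c->max_concurrent) {
            send_rst_stream(c, H2_REFUSED_STREAM);  /* reject excess */
            return -1;
        }
        c->active++;
        return 0;
    }

    /* Called when the request is finalized at the application level,
     * possibly long after RST_STREAM was received (e.g. while
     * proxy_ignore_client_abort keeps the upstream request running). */
    static void h2_request_finalized(struct h2_conn *c)
    {
        c->active--;
    }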

This behaviour does not match the HTTP/2 specification, and any
client which tries to maintain exactly
SETTINGS_MAX_CONCURRENT_STREAMS streams and uses RST_STREAM might
hit the limit.  On the other hand, this behaviour matches the
long-standing limit_conn mechanism in nginx
(http://nginx.org/r/limit_conn), and it ensures that the
concurrent streams limit cannot be bypassed.

The 2x margin is in the additional limit we've introduced just
now, and it specifically targets RST_STREAM misuse and/or lack of
SETTINGS_MAX_CONCURRENT_STREAMS management on the client side:
even if streams are immediately closed by nginx (either because
it was able to send the response, or because the stream was
reset), no more than 2x SETTINGS_MAX_CONCURRENT_STREAMS streams
are allowed until reading from the socket blocks.  The 2x
multiplier here is to cover a potentially valid scenario where a
client with an unstable connection decides to stop loading a page
(with a large set of resources) and instead navigates to another
page (with another large set of resources), so the server will
simultaneously receive multiple HEADERS frames, followed by
RST_STREAM frames, and another set of HEADERS frames.
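
A minimal sketch of that guard, again with hypothetical names: every
created stream counts, even one that is reset or answered
immediately, and the window starts over once reading from the socket
would block:

    struct h2_conn {
        unsigned int new_streams;       /* created in this window */
        unsigned int max_concurrent;    /* advertised limit, 128 */
    };

    /* Called for each HEADERS frame that creates a stream, whether
     * it is reset afterwards or not. */
    static int h2_count_new_stream(struct h2_conn *c)
    {
        if (++c->new_streams > 2 * c->max_concurrent)
            return -1;  /* stop reading from the socket for now */
        return 0;
    }

    /* Called once reading from the socket would block (no more
     * buffered frames): a well-behaved client never accumulates
     * enough streams in one window to hit the limit. */
    static void h2_read_blocked(struct h2_conn *c)
    {
        c->new_streams = 0;
    }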

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/
