Re: Prague side meeting: HTTP/2 concurrency and request cancellation (CVE-2023-44487)

Hi Roy,

Responding to some points in line:

On Sat, 14 Oct 2023, 19:25 Roy T. Fielding, <fielding@gbiv.com> wrote:

> FTR, I don't see any need to change the protocol; just change the
> implementations that are vulnerable because their expectations are wrong
> about clients on the open Internet. That's why many HTTP/2 implementations
> were already prepared for this attack.
>
> Speaking of which, that CVE is completely irresponsible. A CVE is supposed
> to list known vulnerabilities in released software, not potential
> vulnerabilities in all implementations of a single protocol. Now we have
> security poodles from all over the world asking each and every HTTP project
> whether they have a fix for a vulnerability that they never had in the
> first place, all because the CVE authors prefer to blame the protocol
> instead of their own internet-facing implementations. Don't do that,
> especially not for a low severity DDoS load-based attack. It has created a
> DDoS of its own, killing time we have for all of our open source projects,
> and we don't scale like a server.
>
> The RFC cannot guess what is the appropriate number of max concurrent
> streams for a given interface because h2 might be used in both trusted and
> untrusted environments, with both custom and generic clients, and with a
> great deal of variance in server capabilities (memory and CPU). It is fair
> to say that any server should be capable of receiving 100 concurrent stream
> openings. That does not mean the server has to provide an equal service to
> those 100 open streams, nor does it say that a server has to ignore the
> reset streams count just because the client said so. The server is in
> control of the interface and is fully capable of adjusting its services
> regardless of the RFC. The server can make the choice of availability
> versus interoperability far better than we can.
>

The RFC makes it clear that the protocol lets clients ask for unlimited
concurrent streams unless the server says otherwise. Whether the server
sets a limit or not, it can reset streams for any reason.
Interoperability is a concern because how a client deals with stream
limits and reset streams is an implementation matter (e.g. see Glenn's
points about JS interfaces). And some client-side implementations don't
have APIs that let apps articulate what should happen when a concurrency
limit is hit. Hardcoding a magic limit inside clients seems to be quite
popular, and it has probably helped avoid runaway clients. I'd put money
on most people who use clients not understanding all the nuances.
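
To illustrate, here's a minimal sketch (Go, all names hypothetical, not
any real client library's API) of a gate that hands the wait/timeout/
give-up decision to the application, via a context, rather than baking
in a magic number:

    package main

    import (
        "context"
        "fmt"
    )

    // streamGate bounds how many streams this client has open at once,
    // sized from the peer's advertised SETTINGS_MAX_CONCURRENT_STREAMS.
    type streamGate struct {
        slots chan struct{}
    }

    func newStreamGate(maxConcurrent uint32) *streamGate {
        return &streamGate{slots: make(chan struct{}, maxConcurrent)}
    }

    // acquire blocks until a slot is free or the caller's context is
    // done -- so the app, not the library, decides whether to wait,
    // time out, or give up when the concurrency limit is hit.
    func (g *streamGate) acquire(ctx context.Context) error {
        select {
        case g.slots <- struct{}{}:
            return nil
        case <-ctx.Done():
            return ctx.Err()
        }
    }

    // release frees a slot when the stream closes (END_STREAM or RST_STREAM).
    func (g *streamGate) release() { <-g.slots }

    func main() {
        gate := newStreamGate(100) // value from the server's SETTINGS frame
        if err := gate.acquire(context.Background()); err != nil {
            fmt.Println("gave up waiting for a stream slot:", err)
            return
        }
        defer gate.release()
        fmt.Println("slot acquired; open the stream here")
    }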

I agree with you that HTTP/2 can't say much about what limits make sense
for any deployment. But it currently gives two values that don't seem to
reflect much reality. MAX_CONCURRENT_STREAMS seems almost as impotent as
MAX_HEADER_LIST_SIZE at actually providing usable bounds, especially if
the server-side solution is to implement private limits that are never
communicated directly.
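
FWIW, some stacks do at least let a deployment pick and advertise the
value rather than keep it private; e.g. with Go's golang.org/x/net/http2
(sketch below; the cert paths are placeholders):

    package main

    import (
        "log"
        "net/http"

        "golang.org/x/net/http2"
    )

    func main() {
        srv := &http.Server{Addr: ":8443", Handler: http.NewServeMux()}

        // Advertise the concurrency bound in the SETTINGS frame rather
        // than enforcing an undisclosed private limit. 100 matches the
        // RFC's "no smaller than 100" recommendation; tune per deployment.
        if err := http2.ConfigureServer(srv, &http2.Server{
            MaxConcurrentStreams: 100,
        }); err != nil {
            log.Fatal(err)
        }

        // cert.pem / key.pem are placeholders.
        log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
    }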

> The working group should prepare a detailed explanation for a Security
> Considerations section that describes how such attacks work and how they
> can be mitigated by reducing service allocations to misbehaving clients.
> There is no need to change the protocol itself, other than to acknowledge
> (once again) that anything specified by the protocol is subject to change
> by the server if it perceives the client to be an attack rather than an
> interoperable client.
>

The focus has been on servers because that's where the attacks manifest.
But stream concurrency applies in both directions, so any new work or
text should give some consideration to that.
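
And to make the "reduce service allocations to misbehaving clients"
mitigation concrete, here's a rough per-connection accounting sketch
(all names and thresholds hypothetical, not tied to any real server's
internals) that counts client resets in a window and trips toward
GOAWAY(ENHANCE_YOUR_CALM):

    package main

    import (
        "fmt"
        "time"
    )

    // resetCounter tracks client-initiated stream resets per connection
    // and trips once the count in the current window looks abusive.
    type resetCounter struct {
        window      time.Duration
        maxResets   int
        windowStart time.Time
        resets      int
    }

    func newResetCounter(window time.Duration, maxResets int) *resetCounter {
        return &resetCounter{
            window: window, maxResets: maxResets, windowStart: time.Now(),
        }
    }

    // onClientReset is called per RST_STREAM received (or per stream the
    // client cancels right after opening). It reports whether the server
    // should stop serving this connection, e.g. via GOAWAY(ENHANCE_YOUR_CALM).
    func (c *resetCounter) onClientReset(now time.Time) bool {
        if now.Sub(c.windowStart) > c.window {
            c.windowStart, c.resets = now, 0
        }
        c.resets++
        return c.resets > c.maxResets
    }

    func main() {
        guard := newResetCounter(time.Second, 200) // thresholds illustrative
        for i := 0; i < 250; i++ {
            if guard.onClientReset(time.Now()) {
                fmt.Println("reset budget exceeded; send GOAWAY and close")
                return
            }
        }
    }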

Cheers
Lucas
