- From: Ben Schwartz <bemasc@meta.com>
- Date: Wed, 22 Oct 2025 22:00:20 +0000
- To: Erik Nygren <nygren@gmail.com>
- CC: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
- Message-ID: <DS0PR15MB56743A4B649B2757F2811366B3F3A@DS0PR15MB5674.namprd15.prod.outlook.com>
From: Erik Nygren <nygren@gmail.com>
Sent: Wednesday, October 22, 2025 2:06 PM
> Unassociated random numbers alone won't help enough with request desynchronization. An attacker wanting to do something could just add their own within a body to create a new request and get things desynchronized.
I believe simple random number matching is enough to protect against misparsing of the header fields (by terminating the connection if End-Headers is missing or wrong). It does not protect against misparsing of the request-line, or of the body (presumably due to chunked transfer coding issues?).
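To make concrete what I mean by "simple random number matching": a minimal sketch, assuming the sender places a fresh random value in a final header field (I'm calling it "End-Headers" here; the field name and placement are my assumptions, not a spec) and the receiver terminates the connection if that value is missing or wrong:

```python
import secrets

END_HEADERS_FIELD = "end-headers"  # hypothetical field name for this sketch

def add_end_headers(headers: dict[str, str]) -> tuple[dict[str, str], str]:
    """Sender side: append a fresh random token as the last header field,
    and remember the token so the peer's view can be checked against it."""
    token = secrets.token_hex(8)
    out = dict(headers)
    out[END_HEADERS_FIELD] = token
    return out, token

def end_headers_ok(parsed_headers: dict[str, str], expected: str) -> bool:
    """Receiver side: the parsed header block must contain exactly the
    expected token. A missing or mismatched value suggests the header
    section was misparsed, so the connection should be closed, not reused."""
    return parsed_headers.get(END_HEADERS_FIELD) == expected
```

This catches a desynchronized or truncated header section, but (as noted above) says nothing about the request-line or the body.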
Overall, I think that these "two-ended" approaches are not the right way forward. Servers that are active and devoted enough to do this work should just enable HTTP/2.
The interesting question, in my view, is what can be done unilaterally by an intermediary when the next-hop server is potentially vulnerable. If disabling connection reuse "across the board" is too difficult, another option would be to isolate "risky" requests onto separate connections with "Connection: close", while continuing to pool/pipeline "innocuous" requests. I believe intermediaries could probably identify risky requests pretty reliably with some simple heuristics ("weird" characters and escaping, nontrivial content length, etc.).
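A rough sketch of the kind of classifier I have in mind (the specific character classes and thresholds are illustrative guesses, not a recommendation):

```python
import re

def is_risky(target: str, headers: dict[str, str]) -> bool:
    """Return True if the intermediary should isolate this request on its
    own upstream connection (sent with 'Connection: close') rather than
    reusing a pooled connection. Header names are assumed lowercased."""
    # "Weird" characters: anything outside printable ASCII in the target.
    if re.search(r"[^\x21-\x7e]", target):
        return True
    # Suspicious escaping: percent-escapes decoding to control characters,
    # whitespace, or DEL.
    if re.search(r"%(0[0-9a-fA-F]|1[0-9a-fA-F]|20|7[fF])", target):
        return True
    # Nontrivial body: any declared content, a malformed Content-Length,
    # or any use of transfer coding.
    cl = headers.get("content-length", "0")
    if not cl.isdigit() or int(cl) > 0:
        return True
    if "transfer-encoding" in headers:
        return True
    return False
```

The idea is that the common case (simple GETs with clean targets and no body) keeps the performance benefit of pooling, while anything that could plausibly be a smuggling vector pays the cost of a dedicated connection.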
--Ben
Received on Wednesday, 22 October 2025 22:00:28 UTC