Re: HTTP/1.1 Request Smuggling Defense using Cryptographic Message Binding (new draft)

Many of the issues we're seeing involve body handling, which is why a
headers-only defense is inadequate.
This is an area where even heavily tested servers are having issues found
at a non-trivial rate.

A colleague of mine wrote a paper with some of his grad students that goes
into detail:  https://arxiv.org/pdf/2510.09952
It explores some of the possibilities in this area.

Here is a much more opinionated position on this:  https://http1mustdie.com/
but I don't think it captures the reality that HTTP/1 isn't going to die
anytime soon, even if we publish an http1-considered-harmful draft.

I think that HTTP/1 deployments interoperating across multiple vendors are
sadly here to stay. That's why a "two-ended" approach that is easily
deployable, easily implementable, and able to mitigate a significant
fraction of these observed vulnerabilities would be valuable to have in
the ecosystem.
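
To make that concrete, here's a rough sketch (Python, purely illustrative)
of what the receiving end of the simplest two-ended check could look like:
the sender includes a per-request random token in an early header field and
repeats it in a final End-Headers field, and the receiver drops the
connection if its own parse of the header block doesn't end with a matching
copy. The field names and semantics here are placeholders I made up for
illustration, not something taken from the draft.

import secrets

def make_binding_fields():
    # Sender side: generate a fresh token and the two fields that carry it.
    token = secrets.token_hex(16)
    return [("Request-Token", token), ("End-Headers", token)]

def headers_are_bound(parsed_headers):
    # Receiver side: parsed_headers is the list of (name, value) pairs in
    # the order this implementation parsed them.  Returns False when the
    # block claims to use the scheme but does not end with a matching
    # End-Headers; the caller should then close the connection rather than
    # process or forward a possibly desynchronized request.
    token = next((v for n, v in parsed_headers
                  if n.lower() == "request-token"), None)
    if token is None:
        return True  # peer doesn't use the scheme; whether to allow is policy
    last_name, last_value = parsed_headers[-1]
    return (last_name.lower() == "end-headers"
            and secrets.compare_digest(last_value, token))

As Ben notes below, a check like this only catches disagreements about where
the header block ends; it says nothing about the request-line or about
chunked bodies, which is exactly why I think the body needs to be covered
as well.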

   Erik


On Wed, Oct 22, 2025 at 6:00 PM Ben Schwartz <bemasc@meta.com> wrote:

> *From:* Erik Nygren <nygren@gmail.com>
> *Sent:* Wednesday, October 22, 2025 2:06 PM
>
> > Unassociated random numbers alone won't help enough with request
> > desynchronization.  An attacker could just add their own random number
> > within a body to create a new request and get things desynchronized.
>
> I believe simple random number matching is enough to protect against
> misparsing of the header fields (by terminating the connection if
> End-Headers is missing or wrong).  It does not protect against misparsing
> of the request-line, or of the body (presumably due to chunked transfer
> coding issues?).
>
> Overall, I think that these "two-ended" approaches are not the right way
> forward.  Servers that are active and devoted enough to do this work should
> just enable HTTP/2.
>
> The interesting question, in my view, is what can be done unilaterally by
> an intermediary when the next-hop server is potentially vulnerable.  If
> disabling connection reuse "across the board" is too difficult, another
> option would be to isolate "risky" requests into separate connections with
> "Connection: close", while continuing to pool/pipeline "innocuous"
> requests.  I believe intermediaries could probably identify risky requests
> pretty reliably with some simple heuristics ("weird" characters and
> escaping, nontrivial content length, etc.).
>
> --Ben
>
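
(For concreteness, here is a rough sketch of the kind of classification Ben
describes above. The specific checks and thresholds are my own guesses,
purely illustrative, not a vetted heuristic.)

import re

# Anything outside printable ASCII in the request line is suspicious:
# control bytes, bare CR or LF, and similar are common smuggling ingredients.
_SUSPECT_CHARS = re.compile(r"[^\x20-\x7e]")

def is_risky(method, target, headers, body_length):
    # Return True if the request should be isolated on its own upstream
    # connection (sent with "Connection: close") instead of being pooled.
    if _SUSPECT_CHARS.search(method + " " + target):
        return True
    if "%" in target:          # crude stand-in for "unusual escaping"
        return True
    te = [v for n, v in headers if n.lower() == "transfer-encoding"]
    cl = [v for n, v in headers if n.lower() == "content-length"]
    if te and cl:              # conflicting framing: the classic CL.TE setup
        return True
    if len(cl) > 1:            # duplicate Content-Length fields
        return True
    if any(v.strip().lower() != "chunked" for v in te):
        return True
    if body_length > 0:        # nontrivial body: don't risk connection reuse
        return True
    return False

An intermediary using something like this would send anything flagged on a
fresh upstream connection with "Connection: close" added, while continuing
to pool everything else.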
