Re: HTTP/1.1 Request Smuggling Defense using Cryptographic Message Binding (new draft)

Thank you Erik and Mike for sharing the draft.

I'm not sure whether we would want to do this kind of thing, but I
have some technical comments on the approach.

On Thu, Oct 23, 2025 at 7:16 Erik Nygren <nygren@gmail.com> wrote:
>
> Many of the issues we're seeing include body handling which is why just a header defense is inadequate.
> This is an area where even heavily tested servers are having issues found at a non-trivial rate.
>
> A colleague of mine has a paper, written with some of his grad students, that goes into detail:  https://arxiv.org/pdf/2510.09952
> It explores possibilities in this area.
>
> Here is a much more opinionated position on this:  https://http1mustdie.com/
> but I don't think it captures the reality that http1 isn't going to die anytime soon,
> even if we publish an http1-considered-harmful draft.
>
> I think that HTTP/1 servers interoperating between multiple vendors are sadly here to stay,
> hence a "two-ended" approach that is easily deployable, easily implementable,
> and can mitigate a significant fraction of these observed vulnerabilities
> would be valuable to have in the ecosystem.

But that might not mean we need a mechanism that relies on a shared secret.

What you might need instead is simply a hash chain.

For example, a `Bound` header that appears as the last header in each
header section could carry a hash computed as:

```
sha256(bound_value_of_previous_request || "::" || this_header_section)
```
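As a minimal sketch of how such a chain might be computed (the names `bound_value` and the empty initial value are illustrative assumptions, not from the draft):

```python
import hashlib

# Hypothetical sketch of the hash-chain idea: each request's Bound
# header commits to the previous request's Bound value plus the
# current request's header section.

def bound_value(prev_bound: str, header_section: bytes) -> str:
    """Compute the Bound header value for the current request."""
    h = hashlib.sha256()
    h.update(prev_bound.encode("ascii"))  # previous Bound value (hex)
    h.update(b"::")                       # separator from the formula
    h.update(header_section)              # this request's header bytes
    return h.hexdigest()

# Assume the first request on a connection uses an empty previous value.
b1 = bound_value("", b"GET / HTTP/1.1\r\nHost: example.com\r\n")
b2 = bound_value(b1, b"GET /next HTTP/1.1\r\nHost: example.com\r\n")
```

Because each value depends on all previous header sections on the connection, the receiver detects any earlier desynchronization as soon as a Bound value fails to verify, without either side holding a shared secret.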

Separately, I think there are two issues with the draft.

The first issue is negotiation.

As Watson points out [1], this kind of mechanism needs to be
hop-by-hop. The draft assumes a TLS-based negotiation scheme to ensure
that property; however, as Ben notes, a considerable amount of
HTTP/1.1 traffic is cleartext or runs in deployments where TLS
changes aren’t feasible [2].

Our precedent for addressing this is to use an `Upgrade` request and
then send something that cannot be misinterpreted as ordinary
HTTP/1.1; IIUC, WebSocket (RFC 6455) and h2c (RFC 7540) use this
approach.
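A hypothetical handshake in that style might look like the following (the `bound` protocol token is made up for illustration; it is not defined anywhere):

```python
# Illustrative Upgrade negotiation, in the style of RFC 6455 / RFC 7540.

upgrade_request = (
    b"GET / HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Connection: Upgrade\r\n"
    b"Upgrade: bound\r\n"       # hypothetical token for the new framing
    b"\r\n"
)

switch_response = (
    b"HTTP/1.1 101 Switching Protocols\r\n"
    b"Connection: Upgrade\r\n"
    b"Upgrade: bound\r\n"
    b"\r\n"
)
# After the 101, both sides switch to the new framing, which an
# unaware HTTP/1.1 parser cannot misinterpret as ordinary HTTP/1.1.
```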

The other issue is that the draft does not provide protection against
“response coalescing” attacks.

Consider a case where an attacker induces a victim client to pipeline
two HTTP/1.1 requests through an intermediary, and the intermediary
receives two responses that, from its perspective, look like a single
HTTP response. With the proposed approach, both the intermediary and
the client would treat the bytes of the second response as part of the
first response. They might only detect the desynchronization when
handling the third response, at which point it is too late.
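A minimal sketch of that coalescing, assuming a naive Content-Length-based reader on the receiving side (the message contents are made up):

```python
# "Response coalescing": the first response's Content-Length is
# inflated so that the second response's bytes look like the tail of
# the first response's body.

resp2 = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Length: 5\r\n"
    b"\r\n"
    b"hello"
)
body1 = b"ok"
resp1 = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Length: " + str(len(body1) + len(resp2)).encode() + b"\r\n"
    b"\r\n" + body1
)

stream = resp1 + resp2  # two responses, forwarded back-to-back

def naive_read_response(buf: bytes):
    """Read one response body, trusting Content-Length."""
    head, _, rest = buf.partition(b"\r\n\r\n")
    clen = int(head.split(b"Content-Length: ")[1].split(b"\r\n")[0])
    return rest[:clen], rest[clen:]

body, remaining = naive_read_response(stream)
# The entire second response is swallowed into the first body, leaving
# nothing on the stream for the second pipelined request.
```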

To prevent this, the simplest solution is probably to prohibit
request pipelining.

[1] https://lists.w3.org/Archives/Public/ietf-http-wg/2025OctDec/0115.html
[2] https://lists.w3.org/Archives/Public/ietf-http-wg/2025OctDec/0107.html

>
>    Erik
>
>
> On Wed, Oct 22, 2025 at 6:00 PM Ben Schwartz <bemasc@meta.com> wrote:
>>
>> From: Erik Nygren <nygren@gmail.com>
>> Sent: Wednesday, October 22, 2025 2:06 PM
>>
>> > Unassociated random numbers alone won't help enough with request desynchronization.  An attacker wanting to do something could just add their own within a body to create a new request and get things desynchronized.
>>
>> I believe simple random number matching is enough to protect against misparsing of the header fields (by terminating the connection if End-Headers is missing or wrong).  It does not protect against misparsing of the request-line, or of the body (presumably due to chunked transfer coding issues?).
>>
>> Overall, I think that these "two-ended" approaches are not the right way forward.  Servers that are active and devoted enough to do this work should just enable HTTP/2.
>>
>> The interesting question, in my view, is what can be done unilaterally by an intermediary when the next-hop server is potentially vulnerable.  If disabling connection reuse "across the board" is too difficult, another option would be to isolate "risky" requests into separate connections with "Connection: close", while continuing to pool/pipeline "innocuous" requests.  I believe intermediaries could probably identify risky requests pretty reliably with some simple heuristics ("weird" characters and escaping, nontrivial content length, etc.).
>>
>> --Ben



-- 
Kazuho Oku

Received on Thursday, 23 October 2025 00:54:09 UTC