Re: Attack research on HTTP/2 implementations

Is anyone interested in adding an adaptive DDoS-mitigation defense at
the TLS layer, so that attackers cannot force servers to re-compute
public keys in a tight loop?

(The server provides a nonce, a difficulty in bits, and a hash
algorithm; the client responds with a lightweight proof-of-work.)
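
Roughly what I have in mind, as a toy sketch only (the SHA-256 choice,
the field names, and the counter encoding are placeholders I made up,
not a proposed wire format):

    # Toy hash-based client puzzle: the server issues a challenge and
    # verifies the answer with a single hash; the client does the work.
    import hashlib
    import os

    def make_challenge(bits=20):
        # nonce is random; bits is the required number of leading zero bits
        return {"nonce": os.urandom(16), "bits": bits, "algo": "sha256"}

    def solve(challenge):
        # Client side: brute-force a counter until the hash of
        # nonce || counter has `bits` leading zero bits.
        nonce, bits = challenge["nonce"], challenge["bits"]
        counter = 0
        while True:
            digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") >> (256 - bits) == 0:
                return counter
            counter += 1

    def verify(challenge, counter):
        # Server side: a single hash, so checking stays cheap even under load.
        digest = hashlib.sha256(challenge["nonce"] + counter.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") >> (256 - challenge["bits"]) == 0

The "adaptive" part would just be tuning bits: raise it while under
attack, drop it back to zero otherwise, so the server spends one hash
per attempt while a client spends on the order of 2^bits.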

I have no idea how to propose this properly within the HTTP/2 protocol,
but I do think it would be useful.

I don't see it in there.

On Fri, Aug 6, 2021 at 12:04 AM Nick Harper <ietf@nharper.org> wrote:
>
>
>
> On Thu, Aug 5, 2021 at 8:46 PM Willy Tarreau <w@1wt.eu> wrote:
>>
>> Hi Martin,
>>
>> On Fri, Aug 06, 2021 at 10:43:00AM +1000, Martin Thomson wrote:
>> > https://portswigger.net/research/http2
>>
>> Thanks for the link, pretty interesting stuff there!
>>
>> > The introduction claims to have found imperfections in the RFC, so I read
>> > this fairly carefully.  There's solid work here in terms of attacking
>> > implementations, but no concrete specification problems.
>>
>> I agree; unless I'm mistaken, everything that was attacked there is
>> already dealt with in the spec (allowed characters in values & names,
>> etc.).
>
>
> I saw one thing in the paper that I don't think is addressed by RFC 7540: the handling of a request that contains both an :authority pseudo-header and a Host header. I see that draft-ietf-httpbis-http2bis-03 has new language to mostly cover that issue. I say "mostly" because I don't see any specification of what should happen if multiple :authority pseudo-headers are present. (I would argue that that is a malformed request.)
>>
>>
>> > In terms of actual changes to specifications, the work we did in the HTTP/2
>> > revision on field validation should already cover all of these attacks. Not
>> > that RFC 7540 didn't, but we're a lot, lot clearer about it now.
>>
>> Yes, the new one is way better and more readable. In 7540 you often have
>> to compare a series of "must" with a series of "must not" from another
>> section.
>>
>> > There's a lesson in here for our industry regarding how implementations deal
>> > with untrustworthy inputs.  The thing we might each reflect on is why we
>> > haven't already internalized that lesson.  It's not like this is a new class
>> > of attack or anything.
>>
>> I suspect that some of the attacked sites might be using outdated
>> implementations of some of the usual suspects. We've all had such
>> weaknesses in our early implementations precisely because they were
>> not easy to spot in the spec or because some of them were hard to
>> implement and there was no justification in the spec. For example, I
>> remember that the very first H2 implementation in haproxy didn't
>> explicitly compare the content-length with the amount of transferred
>> bytes in the H2 layer since that was already done in the inner HTTP
>> layers. I don't *think* that could have exposed it to one of these
>> vulnerabilities, but back then I could certainly have overlooked some
>> of them!
>>
>> In that sense, the new trend of wording in the core spec along the
>> lines of "don't do that because it exposes you to this risk" is much
>> more effective at encouraging implementers to carefully follow all
>> the important rules.
>>
>> Cheers,
>> Willy
>>

Received on Monday, 9 August 2021 08:01:42 UTC