
Re: Striving for Compromise (Consensus?)

From: Poul-Henning Kamp <phk@phk.freebsd.dk>
Date: Sat, 12 Jul 2014 07:47:44 +0000
To: Roberto Peon <grmocg@gmail.com>
cc: Amos Jeffries <squid3@treenet.co.nz>, HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <30158.1405151264@critter.freebsd.dk>
In message <CAP+FsNcuUm=hp4XF=MQX8c644vYkCRCeJsXpXYk2L=JXCoocOw@mail.gmail.com>, Roberto Peon writes:

>> So let me ask you this:
>>  Why are you arguing so fiercely against a proposal which sends up-front
>> the information to identify whether you need to make such a trust
>> decision in the first place?
>That information, about the size of the header, is not trustworthy in a DoS
>scenario.

That doesn't make *any* sense.

If the attacker sends a bogus frame length, it doesn't matter if
it is 16 bogus bits or 31 bogus bits he sends, it's going to
desynchronize the framing layer and that's the end of the story.
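
To make the desynchronization point concrete, here is a hedged sketch (names and layout are mine, loosely modeled on an HTTP/2-style frame header, not taken from any draft or implementation): the length field alone decides where the next frame header starts, so a bogus value, whatever its bit width, corrupts everything that follows.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: the frame header carries a big-endian
 * 24-bit payload length in its first three octets. */
static uint32_t frame_len(const uint8_t *hdr)
{
    return ((uint32_t)hdr[0] << 16) | ((uint32_t)hdr[1] << 8) | hdr[2];
}

/* The receiver locates the next frame header purely by trusting
 * the length of the current one.  If that length is bogus, this
 * offset lands somewhere inside the peer's next frame and the
 * framing layer is desynchronized from here on. */
static size_t next_frame_offset(size_t cur, const uint8_t *hdr)
{
    return cur + 9 + frame_len(hdr);   /* 9 = fixed header size */
}
```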

>If the information about the supposed size of the header is used to
>allocate buffers, it can create an asymmetric DoS scenario, where the
>attacker is spending much less resource than the server-side is spending.

Only a fool would allocate buffers based on what an untrusted peer
tells him.

Buffers will be allocated based on the max frame size the receiver
has decided, in our proposal he announces this with SETTINGS.

If a sender sends a length longer than what SETTINGS allows, I bet you
that almost everybody is going to conclude "DoS!" and just close
the connection.   This is a major advantage of the SETTINGS:  the
client cannot do the "but I didn't know..." routine.
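
A minimal sketch of that policy (the names and the 16384 default are mine, for illustration only): the receiver announces its limit once in SETTINGS, and any frame claiming more is treated as hostile outright.

```c
#include <stdbool.h>
#include <stdint.h>

/* Value the receiver announced in its SETTINGS frame;
 * 16384 is just an illustrative default. */
#define MY_MAX_FRAME_SIZE 16384u

/* Returns true if the connection should simply be closed.
 * A peer exceeding the advertised limit cannot plead
 * ignorance: it was told the limit up front. */
static bool frame_len_is_hostile(uint32_t announced_max, uint32_t len)
{
    return len > announced_max;
}
```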

>The obvious work-around is to allocate buffers as they're consumed, which
>is essentially what would happen with header fragmentation.

No, this is not at all an obvious way to do it, because you move
the memory issue from size of allocation to number of allocations,
and you force the high-pressure points to use non-optimal discontiguous
memory slices to store what in all legitimate traffic is contiguous.

In other words that makes the attack worse:  The attacker now forces
you to use more memory, since you have the overhead of linking the
buffers together, and the CPU overhead of having to traverse that
chain.
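
As a rough illustration of that overhead argument (the structure is invented for this sketch, not from any real stack): every fragment in a chained-buffer scheme pays for bookkeeping that one contiguous allocation would not need, and the attacker controls how many fragments there are.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical chained-buffer node: each fragment carries a
 * next-pointer and a length on top of its payload, and the
 * consumer pays CPU to walk the chain instead of indexing
 * into one contiguous region. */
struct buf_frag {
    struct buf_frag *next;
    size_t           len;
    uint8_t          data[];
};

/* Per-fragment bookkeeping cost; with many small attacker-chosen
 * fragments this overhead starts to dominate the payload itself. */
static size_t chain_overhead(size_t nfrags)
{
    return nfrags * sizeof(struct buf_frag);
}
```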

>Another issue: Let's say that the server has an internal idea of the amount
>of memory it wishes to commit.
>It sees that the advertised header size is greater than it is willing to
>accept. It rejects it immediately. It has now leaked information to the
>attackers about the state of the server.

How is this different from the server telling that outright in SETTINGS ? :-)

How is the attacker not able to similarly probe the server using
multiple CONTINUATION frames ?

>And then there is the fact that we do have compression, and one can make
>promises till blue in the face about how much memory one is willing to
>accept compressed, and it has zero to do with the state commitment
>uncompressed if the server has to interpret the headers.

As I said earlier:  High-performance implementations are not going
to decompress headers until they are needed:  Why should a load-balancer
decompress User-Agent or Cookies if it doesn't care about them ?
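
A hedged sketch of that lazy approach (the wire format here is a made-up length-prefixed layout purely for illustration, not HPACK): the device scans past fields it does not route on and only materializes the one it actually needs.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Toy record layout for illustration: 1-byte name length, name,
 * 1-byte value length, value.  A load-balancer that only routes
 * on "host" never copies or inspects User-Agent, Cookie, etc. */
static bool find_header(const unsigned char *buf, size_t buflen,
                        const char *want,
                        const unsigned char **val, size_t *vlen)
{
    size_t i = 0;
    while (i < buflen) {
        size_t nlen = buf[i++];
        const unsigned char *name = buf + i;
        i += nlen;
        size_t wlen = buf[i++];
        if (nlen == strlen(want) && memcmp(name, want, nlen) == 0) {
            *val = buf + i;
            *vlen = wlen;
            return true;        /* materialize only this field */
        }
        i += wlen;              /* skip the value without touching it */
    }
    return false;
}
```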

>So, the way I look at it, it offers no practical benefit while placing
>restrictions on proxies, tunnels and non-interpreting gateways w.r.t.
>buffering which induce latency and which increase state commitment even
>when they don't interpret. The bad outweighs the good.

The way you look at it seems to have very little to do with how
high-traffic-pressure HTTP devices actually work and cope with DoS
attacks.
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.
Received on Saturday, 12 July 2014 07:48:07 UTC
