Re: A rough analysis of the impact of headers on DoS

On 12/07/2014 4:21 a.m., Roberto Peon wrote:
> I've seen folks stating that HPACK/or its reference set are related to DoS
> or HoL blocking, but wanted to point out that these are orthogonal.


How is HOL blocking a connection, while one client incrementally HPACKs
streamed headers, orthogonal to HOL blocking?

> There are three modes of DoS attack using headers:
> 1) Stalling a connection by never finishing the sending of a full set of
> headers.
> 2) Resource exhaustion of CPU.
> 3) Resource exhaustion of memory.
> I don't find #1 interesting, since the attacker is mostly just attacking
> themselves
> I don't find #2 very interesting, since there are other (and far more
> effective) ways of attacking including sending a SETTINGS frame between
> every other DATA frame, sending 1-byte DATA frames, or by creating and
> tearing down the connection repeatedly, etc.

Okay. So it's fine to stall your connections with a DDoS. What software
did you write h2 into, BTW?

> I think #3 is the interesting attack vector.
> The current design handles this fairly well, at most one set of headers can
> be incomplete at any point in time

The current design (I assume you mean the status quo / h2-13?) encourages
fragmentation and up to 400% bandwidth overhead on each fragment.

> (sending a large number of incomplete
> headers and keeping most of them incomplete most of the time is an
> excellent attack vector, which the design currently precludes).

This is only possible using the interleaved CONTINUATION proposal. And a
very good reason for removing CONTINUATION.

The large frames "Greg et al" proposal is not vulnerable to this.

> Sending headers via a large buffer or via fragments changes nothing about
> this particular attack vector-- the important part is to keep the max
> number of incomplete headers down to as small as possible (e.g. 1).

It changes the statefulness of the protocol. A pretty fundamental
property in HTTP.

Atomic frames are inherently stateless. Fragments are stateful by nature.

The driver for large frames is to retain atomic and stateless frames.
CONTINUATION in all its forms is fragmented and stateful.

> In terms of memory exhaustion, knowing the size of the set of headers which
> is being received is marginally helpful and can be detrimental in terms of
> DoS.

Not knowing the size is even more detrimental in terms of DoS *and* HOL
blocking.
> On the helpful side, if one realizes that the headers are going to be
> larger than one wishes to handle, knowing the length means knowing that the
> data should be processed and thrown out.

A key property for preventing the DoS.

> On the less-than-helpful side, knowing the size encourages allocating that
> much buffer, which an attacker can then exploit by not finishing the
> connection, and thus increasing the amount of memory which must sit idle.

Not knowing the size encourages buffer over-allocation to even greater
sizes - which the attacker can take advantage of.

In Squid I occasionally see admins configuring multi-MB buffer limits in
order to receive Gbps traffic while also coping with unknown message
sizes. So long as CONTINUATION exists, that cannot change.

> In other words, it is likely that the most robust solution is to allocate
> memory on demand.

Indeed. Which is exactly what large frames encourage: advertise your limit
in advance, allocate a buffer no larger than the frame which is already
arriving, and PROTOCOL_ERROR anything above that.

Simple and efficient. Memory re-allocation on each TCP packet is
simpler, but far, far less efficient.
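A minimal sketch of that allocation strategy (the constant and names are
illustrative, not taken from any draft): the oversized frame is rejected
before a single byte of buffer is allocated.

```python
ADVERTISED_MAX_FRAME = 32 * 1024  # hypothetical limit, advertised to the peer in advance


class ProtocolError(Exception):
    """Raised where the protocol would send PROTOCOL_ERROR."""


def allocate_for_frame(declared_len):
    if declared_len > ADVERTISED_MAX_FRAME:
        raise ProtocolError("frame exceeds advertised limit")  # no memory spent
    return bytearray(declared_len)  # one allocation, sized exactly to the frame
```

The attacker who declares a huge frame costs the receiver nothing; the
honest sender gets exactly one right-sized allocation per frame.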

> There is another interesting memory resource attack, which is the
> memory-expansion of compressed headers into uncompressed (i.e. zip-bomb)
> attack. This is orthogonal to other considerations and can be done so long
> as any compression at all is used, and so probably shouldn't factor into
> any decisions about framing.

Indeed, this has nothing to do with the framing proposals. The HPACK draft
should cater for protection against that type of attack.
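One generic way to cater for it (a sketch of the idea only, not the HPACK
draft's mechanism; the limit and names are mine, though the 32-octet
per-entry overhead mirrors HPACK's size accounting): cap the cumulative
*uncompressed* size of decoded headers, so a small compressed block cannot
expand without bound.

```python
MAX_DECODED_SIZE = 64 * 1024  # hypothetical cap on expanded header size


class HeaderBombError(Exception):
    """Raised when decoded headers exceed the expansion cap."""


def accumulate_headers(decoded_pairs):
    """Collect (name, value) pairs, aborting once expansion exceeds the cap."""
    total, out = 0, []
    for name, value in decoded_pairs:
        # name + value + 32 octets overhead, as in HPACK's entry-size accounting
        total += len(name) + len(value) + 32
        if total > MAX_DECODED_SIZE:
            raise HeaderBombError("decoded headers exceed limit")
        out.append((name, value))
    return out
```

The key point is that the check runs incrementally during decompression, so
the decoder aborts at the cap rather than after materialising the bomb.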


Received on Saturday, 12 July 2014 02:50:22 UTC