Re: Header compression: buffer management

In message <CABP7RbdL=cV2qSMBA3Me65T8tGaU3p9F5Wc690Jqk7q8xk_=iw@mail.gmail.com>
, James M Snell writes:

>I've briefly looked at this and it definitely is a challenge.  With delta,
>we at least have the benefit of allowing the decompressor to set an upper
>bound on stored state size,  but even that can be problematic under heavy
>load and does not completely resolve the issue.  For instance,  a malicious
>client could potentially send hundreds of junk headers frames intentionally
>designed to make the decompressor do significant extra work managing its
>internal buffers.

Any request-response protocol is more or less vulnerable in this respect.

There are basically two things you can do to mitigate:

1) Severely circumscribe the first request on a connection.  This vastly
   increases the attacker's cost and makes sure that the server can
   scrutinize the first request for signs of malicious behaviour.

2) Make sure that the first request becomes available progressively,
   so that similar heuristics can be applied on the fly.

I think #1 is far preferable to #2 as a matter of usability for the
people writing the non-client code, but if done right, #2 is a
workable solution.
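
Roughly, #2 could look like the following C sketch.  All the names and
thresholds are made up for illustration: the decoder hands each header
to the server as soon as it is decoded, along with the decompressor's
current state size, so the server can pull the plug the moment a
heuristic trips:

    #include <stddef.h>

    struct hdr {
            const char *name;
            const char *val;
    };

    /* Called once per decoded header; return nonzero to kill the conn. */
    typedef int hdr_cb(void *priv, const struct hdr *h, size_t state_bytes);

    /* Illustrative heuristics: cap header count and decompressor state. */
    static int
    scrutinize(void *priv, const struct hdr *h, size_t state_bytes)
    {
            size_t *nhdr = priv;

            (void)h;
            if (++(*nhdr) > 100)            /* absurd number of headers */
                    return (1);
            if (state_bytes > 8192)         /* decompressor state over budget */
                    return (1);
            return (0);
    }

The point is that the budget check happens per header, not after the
entire block has already been decompressed and buffered.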

One way to implement #1 is to specify that the first request, however
encoded, compressed or serialized, cannot be bigger than N bytes
on the wire.  For instance, N could be set to 8192.
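
A sketch of such a cap, again with made-up names (conn_read() and
conn_abort() stand in for whatever the server's I/O layer provides):

    #include <stddef.h>
    #include <sys/types.h>

    #define FIRST_REQ_MAX   8192    /* N: wire-size cap, first request */

    struct conn {
            int     fd;
            size_t  first_req_bytes;  /* wire bytes consumed so far */
            int     first_req_done;   /* set once first request parses */
    };

    /* Assumed to exist in the server's I/O layer: */
    ssize_t conn_read(struct conn *, void *, size_t);
    void    conn_abort(struct conn *, const char *);

    /* Read wrapper: enforce the cap until the first request completes. */
    static ssize_t
    capped_read(struct conn *cp, void *buf, size_t len)
    {
            ssize_t n;

            n = conn_read(cp, buf, len);
            if (n <= 0 || cp->first_req_done)
                    return (n);
            cp->first_req_bytes += (size_t)n;
            if (cp->first_req_bytes > FIRST_REQ_MAX) {
                    conn_abort(cp, "first request bigger than N bytes");
                    return (-1);
            }
            return (n);
    }

Rejecting at the read layer means the decompressor never even sees the
excess bytes, which is the whole point.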

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.

Received on Friday, 22 March 2013 07:38:51 UTC