
Re: Header compression: buffer management

From: Roberto Peon <grmocg@gmail.com>
Date: Thu, 21 Mar 2013 17:06:05 -0400
Message-ID: <CAP+FsNdVz3xU=ADgUTOkDAB9gGAFSRQan5wyoZ_kGgunoLjLwA@mail.gmail.com>
To: James M Snell <jasnell@gmail.com>
Cc: Poul-Henning Kamp <phk@phk.freebsd.dk>, RUELLAN Herve <Herve.Ruellan@crf.canon.fr>, HTTP Working Group <ietf-http-wg@w3.org>
As others have pointed out, one cannot simply trust the encoder to do the
right thing.
As a result, either some eviction policy needs to be used (e.g. LRU), or
a good, reliable detection mechanism which produces no false positives
and disconnects only malicious users.

I have doubts that one can correctly identify malicious endpoints, and so I
chose the eviction-policy-based route.

The choice of eviction policy ends up having a large effect on the amount
of overhead necessary for the compressor/decompressor. In the case of LRU,
it is feasible to write all bytes into a single buffer, trading off space
for speed.

An LRU doesn't *require* this tradeoff, however-- one could instead trade
space for CPU and use a tree map or hash map for entries, reducing lookup
time from O(n) to O(lg(n)) or O(1) (average case).

LRU is also one of the simplest eviction policies possible...
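To make the tradeoff concrete, here is a minimal sketch (mine, not tied to any particular wire format) of an LRU-managed header table with a byte budget: a hash-backed ordered map gives O(1) average lookup, and eviction pops the least-recently-used entry when the budget is exceeded. The byte accounting is deliberately simplified.

```python
from collections import OrderedDict

class LruHeaderTable:
    """Toy LRU-managed header table with a byte budget.

    An OrderedDict gives O(1) average lookup (hash map) while
    preserving recency order, so eviction pops from the front.
    """

    def __init__(self, max_bytes=4096):
        self.max_bytes = max_bytes
        self.used = 0
        self.table = OrderedDict()  # (name, value) -> entry size

    @staticmethod
    def _size(name, value):
        # Simplified accounting: just the string lengths.
        return len(name) + len(value)

    def insert(self, name, value):
        key = (name, value)
        size = self._size(name, value)
        if key in self.table:
            self.table.move_to_end(key)  # refresh recency on re-insert
            return
        # Evict least-recently-used entries until the new one fits.
        while self.table and self.used + size > self.max_bytes:
            _, evicted_size = self.table.popitem(last=False)
            self.used -= evicted_size
        if size <= self.max_bytes:
            self.table[key] = size
            self.used += size

    def lookup(self, name, value):
        key = (name, value)
        if key in self.table:
            self.table.move_to_end(key)  # a hit counts as a use
            return True
        return False
```

Both encoder and decoder would run the same eviction logic, so the table contents stay synchronized without extra signaling.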

Also... it is theoretically possible to manage entries in the buffer with
delta that *don't* appear in the current headers; I've just not written an
encoder which does so. You'd do this by inserting the key-value and
immediately declaring that it isn't part of the current headers...
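One way to picture this (illustrative names only, not the actual delta instruction encoding): the encoder adds an entry to the shared table and then immediately drops it from the current reference set, so the entry is available for later header blocks without being emitted in this one.

```python
class ReferenceSet:
    """Toy model of a shared table plus a current-header reference set.

    Illustrative only: real delta instructions are encoded on the wire,
    not expressed as method calls.
    """

    def __init__(self):
        self.table = []          # shared table of (name, value) entries
        self.referenced = set()  # indices that are part of the current headers

    def insert(self, name, value):
        # Add to the table and, by default, to the current header set.
        self.table.append((name, value))
        index = len(self.table) - 1
        self.referenced.add(index)
        return index

    def unreference(self, index):
        # Immediately declare the entry is NOT part of the current headers:
        # it stays in the table for future compression, but isn't emitted now.
        self.referenced.discard(index)

    def current_headers(self):
        return [self.table[i] for i in sorted(self.referenced)]

# Insert an entry for future use without emitting it in this header block:
rs = ReferenceSet()
i = rs.insert("cookie", "long-session-token")
rs.unreference(i)
```

After the `unreference` call, `current_headers()` is empty but the entry is retained in the table for later blocks.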
-=R


On Thu, Mar 21, 2013 at 3:45 PM, James M Snell <jasnell@gmail.com> wrote:

> I've briefly looked at this and it definitely is a challenge. With
> delta, we at least have the benefit of allowing the decompressor to set an
> upper bound on stored state size, but even that can be problematic under
> heavy load and does not completely resolve the issue. For instance, a
> malicious client could potentially send hundreds of junk headers frames
> intentionally designed to make the decompressor do significant extra work
> managing its internal buffers. If the intermediary blindly passes such
> requests through, it will likely end up double buffering the junk data,
> causing even more issues. It is obvious that fairly aggressive defensive
> techniques are going to be required to watch for bad behavior and
> compensate. On the plus side, a delta decompressor could simply choose to
> throw up its hands and give up doing any buffer management, simply passing
> values through... Which, of course, just passes the problem on to someone
> else.
> On Mar 21, 2013 9:51 AM, "Poul-Henning Kamp" <phk@phk.freebsd.dk> wrote:
>
>> In message <6C71876BDCCD01488E70A2399529D5E5163F39C4@ADELE.crf.canon.fr>,
>> RUELLAN Herve writes:
>>
>> >In HeaderDiff we chose to let the encoder decide how the buffer is
>> managed
>> >(regarding additions and removals). These decisions are encoded on the
>> wire
>> > and applied by the decoder on its own buffer. We think this choice has
>> > several advantages.
>>
>> Has this been analysed from a denial-of-service perspective ?
>>
>> Anything in the protocol where the client can cause memory allocation
>> on the server/proxy/whatever, should be scrutinized in a DoS perspective.
>>
>> --
>> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
>> phk@FreeBSD.ORG         | TCP/IP since RFC 956
>> FreeBSD committer       | BSD since 4.3-tahoe
>> Never attribute to malice what can adequately be explained by
>> incompetence.
>>
>>
Received on Thursday, 21 March 2013 21:06:34 GMT
