
RE: Header compression: buffer management

From: RUELLAN Herve <Herve.Ruellan@crf.canon.fr>
Date: Fri, 22 Mar 2013 13:52:35 +0000
To: Amos Jeffries <squid3@treenet.co.nz>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Message-ID: <6C71876BDCCD01488E70A2399529D5E5163F3CD1@ADELE.crf.canon.fr>

> -----Original Message-----
> From: Amos Jeffries [mailto:squid3@treenet.co.nz]
> Sent: jeudi 21 mars 2013 16:12
> To: ietf-http-wg@w3.org
> Subject: Re: Header compression: buffer management
> On 22/03/2013 1:50 a.m., RUELLAN Herve wrote:
> > In HeaderDiff we chose to let the encoder decide how the buffer is
> > managed (regarding additions and removals). These decisions are encoded
> > on the wire and applied by the decoder on its own buffer. We think this
> > choice has several advantages.
> The encoder *will* at some point be a malicious attacker out to cause the
> decoder problems. Placing such an encoder in charge of buffer management
> at the decoder end of the wire is a sure-fire recipe for trouble. For anything
> like correct operation under such conditions it requires a pre-known buffer
> limit and enforcement of that limit at the decoder. Possibly with one size in
> the specs and a larger buffer size offered by the decoder for ongoing traffic
> (yes I can hear the cries about RTT lag already).

True, a malicious encoder could try to cause problems for the decoder. But I think this is true of any encoding: even the HTTP/1.1 header format caused some vulnerabilities due to its whitespace handling.

Here, the decoder only has to check that the total buffer size stays below the negotiated limit.
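To illustrate that check, here is a minimal sketch of a decoder-side buffer that applies encoder-directed additions and removals, rejecting any addition that would exceed the negotiated limit. The names and the size accounting are hypothetical, not HeaderDiff's actual wire format:

```python
class BoundedHeaderBuffer:
    """Decoder-side header buffer. The encoder decides what to add or
    remove; the decoder merely applies those decisions, enforcing only
    the negotiated size limit."""

    def __init__(self, max_size):
        self.max_size = max_size  # negotiated limit, in octets
        self.entries = []         # list of (name, value) pairs
        self.size = 0

    @staticmethod
    def entry_size(name, value):
        # Hypothetical accounting: raw octets of name plus value.
        return len(name.encode()) + len(value.encode())

    def add(self, name, value):
        """Apply an encoder-directed addition, checking the limit."""
        needed = self.entry_size(name, value)
        if self.size + needed > self.max_size:
            raise ValueError("encoder exceeded negotiated buffer limit")
        self.entries.append((name, value))
        self.size += needed

    def remove(self, index):
        """Apply an encoder-directed removal at the given index."""
        name, value = self.entries.pop(index)
        self.size -= self.entry_size(name, value)
```

Note that the decoder never chooses which entry to evict; it only validates that the encoder's instructions stay within bounds, which is the whole of its buffer-management work.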

> > First, this allows to have a very simple and lightweight decoder: it only
> > needs to decode the decisions made by the encoder and apply them. It has
> > no need to effectively implements a LRU mechanism for its buffer. This is
> > especially important for small devices with limited CPU.
> > In addition, we think it could be of interest for an intermediary, that could
> > keep a partial buffer containing only the entries it is interested with, ignoring
> > the other entries.
> I notice that you are not making the same claim for encoder. All HTTP nodes
> will need both algorithms to follow the request+response model.
> How does the encoder+decoder as a pair stack up for size and complexity?

The case of the encoder is handled below. I started with the decoder because all HTTP nodes must implement a decoder able to handle any message sent to them. So the decoder will be roughly the same on a high-end workstation and on a low-power device.
Transmitting the buffer-management information in the stream allows different implementations of the encoder, all compatible with any decoder. An implementer can therefore choose the kind of implementation they want, depending on their constraints (CPU, compaction ratio, ...).
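As an illustration of that encoder-side freedom (names and operations are hypothetical, not HeaderDiff's wire format), two encoders with different eviction policies can drive the same decoder, because each eviction is emitted explicitly on the wire rather than inferred by the peer:

```python
from collections import OrderedDict


class LRUEvictionPolicy:
    """Cheap policy: evict the least recently used entry."""

    def __init__(self):
        self.order = OrderedDict()

    def on_use(self, key):
        self.order.pop(key, None)
        self.order[key] = True  # move to most-recent position

    def pick_victim(self):
        return next(iter(self.order))  # oldest entry

    def on_evict(self, key):
        self.order.pop(key, None)


class FrequencyEvictionPolicy:
    """Costlier policy: evict the least frequently used entry."""

    def __init__(self):
        self.counts = {}

    def on_use(self, key):
        self.counts[key] = self.counts.get(key, 0) + 1

    def pick_victim(self):
        return min(self.counts, key=self.counts.get)

    def on_evict(self, key):
        self.counts.pop(key, None)


class Encoder:
    """The policy only decides *which* entry to evict; the eviction
    itself appears in the emitted operations, so any decoder can
    follow along without knowing the policy."""

    def __init__(self, policy, max_entries):
        self.policy = policy
        self.max_entries = max_entries
        self.table = set()

    def encode(self, key):
        ops = []
        if key in self.table:
            ops.append(("ref", key))        # reference an existing entry
        else:
            if len(self.table) >= self.max_entries:
                victim = self.policy.pick_victim()
                self.table.discard(victim)
                self.policy.on_evict(victim)
                ops.append(("evict", victim))  # explicit on the wire
            self.table.add(key)
            ops.append(("insert", key))
        self.policy.on_use(key)
        return ops
```

Swapping `LRUEvictionPolicy` for `FrequencyEvictionPolicy` changes which entries get evicted, but the operation stream remains decodable by the same simple decoder either way.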

> > Second, this allows to adapt the buffer management to the context. While
> > LRU is a good algorithm in the general case for deciding which entry to
> > remove from the buffer, it may not be the best one for every specific case.
> > More complex algorithms can probably be devised to improve the
> > compaction (for example taking also into account the frequency of
> > occurrence of headers), or simpler algorithms could be used to reduce the
> > CPU cost to the detriment of compaction (this is important for small devices).
> > Adaptability is also important for the future of HTTP/2.0. If we do our job
> > correctly, HTTP/2.0 will still be used in 10 or 20 years, but we have no idea
> > how it will be used at that time. Making HTTP/2.0 adaptable to new usages is
> > crucial to give it a long life expectancy.
> > For these reasons, we prefer to keep all the buffer management on the
> > encoder side, allowing an implementer to choose its preferred approach.
> Good reasons.
> Amos

Received on Friday, 22 March 2013 13:54:30 UTC
