
Re: Header compression: buffer management

From: Amos Jeffries <squid3@treenet.co.nz>
Date: Fri, 22 Mar 2013 04:11:44 +1300
Message-ID: <514B2330.2030807@treenet.co.nz>
To: ietf-http-wg@w3.org
On 22/03/2013 1:50 a.m., RUELLAN Herve wrote:
> In HeaderDiff we chose to let the encoder decide how the buffer is managed (regarding additions and removals). These decisions are encoded on the wire and applied by the decoder on its own buffer. We think this choice has several advantages.

The encoder *will* at some point be a malicious attacker out to cause 
the decoder problems. Placing such an encoder in charge of buffer 
management at the decoder end of the wire is a sure-fire recipe for 
trouble. Anything like correct operation under such conditions 
requires a pre-known buffer limit, enforced at the decoder. Possibly 
with one size in the specs and a larger buffer size offered by the 
decoder for ongoing traffic (yes, I can hear the cries about RTT lag 
already).
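
To make the point concrete, here is a minimal sketch (my own, not from 
any draft) of a decoder-side table that enforces its own pre-known 
limit rather than trusting the encoder's instructions; the class name, 
cost model, and 4096-byte default are illustrative assumptions:

```python
MAX_BUFFER_BYTES = 4096  # assumed spec/negotiated limit, for illustration


class BoundedHeaderTable:
    """Decoder-side header buffer that refuses to grow past its own limit,
    regardless of what additions the encoder signals on the wire."""

    def __init__(self, limit=MAX_BUFFER_BYTES):
        self.limit = limit
        self.size = 0
        self.entries = []  # list of (name, value) pairs

    def add(self, name, value):
        # Simplified cost model: one byte per character of name + value.
        cost = len(name) + len(value)
        if cost > self.limit:
            raise ValueError("entry larger than the negotiated buffer")
        if self.size + cost > self.limit:
            # A compliant encoder never sends this; a malicious one might.
            raise ValueError("encoder tried to overflow the decoder buffer")
        self.entries.append((name, value))
        self.size += cost
```

The point of the sketch is only that the limit check lives at the 
decoder, so a hostile encoder can at worst trigger an error, not 
unbounded memory growth.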


> First, this allows a very simple and lightweight decoder: it only needs to decode the decisions made by the encoder and apply them. It does not need to actually implement an LRU mechanism for its buffer. This is especially important for small devices with limited CPU.
> In addition, we think it could be of interest to an intermediary, which could keep a partial buffer containing only the entries it is interested in, ignoring the other entries.
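
[A rough sketch of the split being described, with an instruction 
format I have invented purely for illustration: the decoder replays 
encoder-signalled add/remove decisions and runs no eviction policy of 
its own.]

```python
def apply_decisions(table, decisions):
    """Replay encoder-signalled buffer operations on the decoder's table.

    table:     dict mapping index -> (name, value)
    decisions: sequence of ("add", index, name, value) or ("remove", index)
               tuples, as decoded from the wire (format assumed here).
    """
    for op in decisions:
        if op[0] == "add":
            _, index, name, value = op
            table[index] = (name, value)
        elif op[0] == "remove":
            _, index = op
            table.pop(index, None)
    return table
```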

I notice that you are not making the same claim for the encoder. All 
HTTP nodes will need both algorithms to follow the request+response 
model. How does the encoder+decoder pair stack up for size and 
complexity?


> Second, this allows the buffer management to be adapted to the context. While LRU is a good general-purpose algorithm for deciding which entry to remove from the buffer, it may not be the best one for every specific case. More complex algorithms can probably be devised to improve compaction (for example, also taking into account the frequency of occurrence of headers), or simpler algorithms could be used to reduce CPU cost to the detriment of compaction (this is important for small devices).
> Adaptability is also important for the future of HTTP/2.0. If we do our job correctly, HTTP/2.0 will still be in use in 10 or 20 years, but we have no idea how it will be used at that time. Making HTTP/2.0 adaptable to new usages is crucial to giving it a long life expectancy.
> For these reasons, we prefer to keep all the buffer management on the encoder side, allowing implementers to choose their preferred approach.

Good reasons.

Amos
Received on Thursday, 21 March 2013 15:12:16 GMT
