- From: Willy Tarreau <w@1wt.eu>
- Date: Mon, 5 Jan 2015 08:27:30 +0100
- To: Jyrki Alakuijala <jyrki@google.com>
- Cc: Roberto Peon <grmocg@gmail.com>, Mark Nottingham <mnot@mnot.net>, Dave Garrett <davemgarrett@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>
On Mon, Jan 05, 2015 at 02:20:08AM +0100, Jyrki Alakuijala wrote:
> On Sat, Jan 3, 2015 at 6:46 PM, Roberto Peon <grmocg@gmail.com> wrote:
>
> > The intent was to make a compressor that was difficult to get wrong from a
> > security perspective, whose implementation was reasonably easy for good
> > programmers, and which did good-enough compression.
>
> A safe (in the sense of CRIME) implementation of deflate can be done in
> less than 1000 lines of code, possibly 500 lines. It is much easier to
> write than a full zlib implementation. Most likely also easier than HPACK
> encoder + decoder.

I think you forgot two important points. First, zlib is generic, and it
is most efficient when compressing large data sets because it was
invented to compress files. Here we're achieving very good compression
ratios on very small sets (a few tens to a few hundreds of bytes of
input data). In all cases, a dedicated compressor will beat a generic
one when you know the specific characteristics of your data set, which
is the case here.

Second, deflate is slow and not efficiently implemented in software. It
uses bit-aligned data and pattern searches, which are really
inefficient, and storing the deflate context requires a huge amount of
memory. What this would end up with is people reimplementing their own
versions of the algorithm to save whatever resources they can, leading
to significantly more security issues.

HPACK is simple to implement, simple to understand, byte-aligned, and
dedicated to a single purpose. And even if it were less efficient than
any generic algorithm you could propose, it would always be possible to
write a more efficient one dedicated to this task.

Willy
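P.S. To make the byte-alignment point concrete, here is a minimal sketch
of the HPACK prefix integer encoding (section 5.1 of the spec). The
function name and buffer handling are mine, not from the spec, and it
assumes the caller provides enough room in <out>:

    #include <stdint.h>
    #include <stddef.h>

    /* Sketch only: encode <value> with an N-bit prefix, as HPACK does.
     * The upper bits of the first byte are left free for the caller's
     * flags, and everything stays on byte boundaries.
     */
    static size_t hpack_encode_int(uint8_t *out, uint8_t flags,
                                   int prefix_bits, uint32_t value)
    {
        uint32_t max_prefix = (1u << prefix_bits) - 1;
        size_t len = 0;

        if (value < max_prefix) {
            /* small values fit entirely in the prefix byte */
            out[len++] = flags | (uint8_t)value;
            return len;
        }
        out[len++] = flags | (uint8_t)max_prefix;
        value -= max_prefix;
        while (value >= 128) {
            /* 7 payload bits per byte, MSB set as continuation flag */
            out[len++] = (uint8_t)((value & 0x7f) | 0x80);
            value >>= 7;
        }
        out[len++] = (uint8_t)value;
        return len;
    }

Compare that with deflate, where the encoder has to carry a bit buffer
across every Huffman symbol it emits: here the whole representation is
a handful of byte writes and shifts.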
Received on Monday, 5 January 2015 07:28:00 UTC