- From: Mark Nottingham <mnot@mnot.net>
- Date: Sat, 3 Jan 2015 12:25:07 -0500
- To: Jyrki Alakuijala <jyrki@google.com>
- Cc: Dave Garrett <davemgarrett@gmail.com>, ietf-http-wg@w3.org
See: https://github.com/http2/compression-test

This is what we used to help make the initial selection. I don't believe that the compressor there has been updated to exactly match the spec; e.g., it doesn't do Huffman (Herve?).

Cheers,

> On 1 Jan 2015, at 10:35 am, Jyrki Alakuijala <jyrki@google.com> wrote:
>
> If the goal is just to make an algorithm that can work with a static entropy code, a static dictionary, and no LZ77 outside the static dictionary, the current format (deflate) needs no changes. Deflate supports all of these concepts. You only need a new encoder -- although zlib with a preset dictionary, run with quality == 1, is already a pretty close match; only the static entropy coding is missing then.
>
> Was HPACK ever benchmarked against deflate in such a configuration? Would you accept help in setting up such an experiment?
>
> Note that by a static dictionary I mean that we would generate a single deflate dictionary from a header corpus and always encode all data with it -- I don't refer to the static Huffman mode in deflate.
>
> On Wed, Dec 31, 2014 at 7:52 PM, Dave Garrett <davemgarrett@gmail.com> wrote:
> The goal of HPACK was never to produce ideal compression, just competent compression not vulnerable to known issues. Some people do want to attempt to use/create a far more efficient codec here, but that is now accepted to be outside the initial scope. What could be very well received is an HTTP/2 extension to allow negotiation of alternate header compression methods. This would allow actual experimentation in an effort to find the most ideal route(s).
>
> Dave

--
Mark Nottingham   http://www.mnot.net/
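[Editor's note: as a rough sketch of the configuration Jyrki describes -- deflate with a single preset dictionary shared by encoder and decoder -- Python's `zlib` module accepts a `zdict` preset dictionary, and `level=1` loosely stands in for "quality == 1". The `DICT` bytes below are an illustrative placeholder, not a real corpus-derived dictionary.]

```python
import zlib

# Hypothetical shared dictionary; in the proposal this would be generated
# once from a large header corpus and fixed for all messages.
DICT = (b"GET / HTTP/1.1\r\nHost: example.com\r\nAccept: text/html\r\n"
        b"User-Agent: Mozilla/5.0\r\nAccept-Encoding: gzip, deflate\r\n")

def compress_headers(headers: bytes) -> bytes:
    # level=1 approximates "quality == 1"; zdict is the preset dictionary.
    c = zlib.compressobj(level=1, zdict=DICT)
    return c.compress(headers) + c.flush()

def decompress_headers(blob: bytes) -> bytes:
    # The decoder must be primed with the same preset dictionary.
    d = zlib.decompressobj(zdict=DICT)
    return d.decompress(blob) + d.flush()
```

Note this is only an approximation of the benchmark being proposed: the entropy coding here is still chosen per-message by zlib, whereas Jyrki's configuration would additionally fix a static entropy code derived from the corpus.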
Received on Saturday, 3 January 2015 17:25:44 UTC