
Re: comments about draft-ietf-httpbis-header-compression

From: Roberto Peon <grmocg@gmail.com>
Date: Sat, 3 Jan 2015 09:46:38 -0800
Message-ID: <CAP+FsNeoRDwU2fRCE1HKEDtGW9ZFV+rrWRPHOsZ3iw3gfSmTEw@mail.gmail.com>
To: Mark Nottingham <mnot@mnot.net>
Cc: Jyrki Alakuijala <jyrki@google.com>, Dave Garrett <davemgarrett@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>
And by 'easy' I mean likely to interoperate.

On Sat, Jan 3, 2015 at 9:46 AM, Roberto Peon <grmocg@gmail.com> wrote:

> The intent was to make a compressor that was difficult to get wrong from a
> security perspective, whose implementation was reasonably easy for good
> programmers, and which did good-enough compression.
>
> Your statement about zlib being 'as safe' misses the mark. zlib has more
> capabilities, including some that are known to be unsafe, and more
> capabilities usually means less safety. Adding bits in the manner you
> suggested doesn't work: it merely forces the attacker to make more
> requests to check whether a guess was right, and that cost grows
> linearly, not exponentially. Even if that weren't true, you'd be adding
> bits (and a fair number of them), which defeats the purpose of
> compression.
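A generic illustration of the length-oracle problem being described here (this is not HPACK; the names and the secret are invented, and Python's zlib stands in for any compressor that mixes attacker-chosen text and a secret in one context):

```python
import zlib

# Hypothetical sketch: when attacker-chosen text is compressed in the same
# context as a secret, the compressed LENGTH reveals whether a guess matched
# the secret's next byte, so recovery takes a number of probes that grows
# linearly with the secret's length, not exponentially.
SECRET = "Cookie: session=s3cr3t"

def compressed_len(attacker_text: str) -> int:
    # Shared compression context: the guess and the secret travel together.
    body = (attacker_text + "\n" + SECRET).encode()
    return len(zlib.compress(body, 9))

# A guess extending the known prefix with the RIGHT character yields a
# longer LZ77 back-reference, hence equal-or-smaller output. One probe per
# candidate character, repeated once per position: a linear search.
right = compressed_len("Cookie: session=s")
wrong = compressed_len("Cookie: session=q")
# right <= wrong: the length oracle distinguishes the two guesses.
```

This is the attack shape that padding ("adding bits") does not defeat: noise forces repeated probes per position, but the search stays linear.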
>
> Non-static entropy coding also allows non-exponential searches of the
> input space if the attacker is allowed to influence the entropy coding.
> That is why HPACK doesn't do non-static entropy coding. It uses a
> canonical Huffman format so that going "non-static" remains possible,
> though I envisioned that happening only before any request bits were
> sent, e.g. signaled in the ALPN token.
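As a side note on the mechanism: a canonical Huffman code is fully determined by its per-symbol code lengths, which is what would make shipping a revised table cheap. A toy sketch (invented four-symbol alphabet, not HPACK's actual table):

```python
# Rebuild a canonical Huffman code from code lengths alone. With a canonical
# code, a replacement table only needs to communicate the lengths; the bit
# patterns then follow deterministically.
def canonical_codes(lengths):
    codes = {}
    code = 0
    prev_len = 0
    # Assign codes in (length, symbol) order, shifting left as lengths grow.
    for length, sym in sorted((l, s) for s, l in lengths.items()):
        code <<= (length - prev_len)
        codes[sym] = format(code, "0{}b".format(length))
        code += 1
        prev_len = length
    return codes

codes = canonical_codes({"a": 1, "b": 2, "c": 3, "d": 3})
# codes: {"a": "0", "b": "10", "c": "110", "d": "111"}
```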
>
> HPACK also offers a means of not doing entropy coding, so if the table
> gets out of date, either the dictionary gets rev'd (e.g. at startup, as
> described above) or one chooses not to use it. This is described in
> Section 5.2.
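The opt-out being referred to is the H bit in HPACK's string-literal representation (RFC 7541, Section 5.2): with H = 0, the octets go on the wire raw and no Huffman table is involved. A minimal sketch of that framing (helper names invented):

```python
# HPACK string literal: one H bit, a length with a 7-bit integer prefix,
# then the string octets. H = 0 means "literal octets, no Huffman coding".
def encode_int(value, prefix_bits, flags=0):
    # HPACK prefixed-integer encoding (RFC 7541, Section 5.1).
    limit = (1 << prefix_bits) - 1
    if value < limit:
        return bytes([flags | value])
    out = bytearray([flags | limit])
    value -= limit
    while value >= 128:
        out.append((value % 128) | 0x80)
        value //= 128
    out.append(value)
    return bytes(out)

def encode_string_raw(s: bytes) -> bytes:
    # H bit (0x80) left clear: skip entropy coding entirely.
    return encode_int(len(s), 7) + s

frame = encode_string_raw(b"custom-key")
# frame[0] == 0x0A: H = 0, length = 10, followed by the raw octets.
```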
>
> -=R
>
> On Sat, Jan 3, 2015 at 9:25 AM, Mark Nottingham <mnot@mnot.net> wrote:
>
>> See:
>>   https://github.com/http2/compression-test
>>
This is what we used to help make the initial selection; I don’t believe
>> that the compressor there has been updated to exactly match the spec;
>> e.g., it doesn’t do Huffman (Herve?).
>>
>> Cheers,
>>
>>
>> > On 1 Jan 2015, at 10:35 am, Jyrki Alakuijala <jyrki@google.com> wrote:
>> >
> If the goal is just to make an algorithm that works with a static
>> > entropy code and a static dictionary, with no LZ77 matching outside
>> > the static dictionary, then the current format (deflate) needs no
>> > changes; deflate supports all of these concepts. You only need a new
>> > encoder -- although zlib with a preset dictionary, run at quality ==
>> > 1, is already a pretty close match; only the static entropy coding is
>> > missing then.
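For what it's worth, the configuration described above can be sketched with Python's zlib bindings; `zdict` is zlib's preset-dictionary mechanism, and the corpus here is invented. Note one gap versus the stated goal: `zdict` only seeds the LZ77 window, it does not forbid matches within the live input, which is part of why a new encoder would still be needed.

```python
import zlib

# A preset ("static") dictionary built once from a header corpus and known
# to both endpoints; no dictionary is negotiated from live traffic.
DICTIONARY = (b"GET /index.html HTTP/1.1\r\n"
              b"Host: example.com\r\n"
              b"accept-encoding: gzip, deflate\r\n")

def compress_headers(headers: bytes) -> bytes:
    # quality == 1; the preset dictionary lets the very first headers be
    # encoded as back-references instead of literals.
    c = zlib.compressobj(level=1, zdict=DICTIONARY)
    return c.compress(headers) + c.flush()

def decompress_headers(blob: bytes) -> bytes:
    d = zlib.decompressobj(zdict=DICTIONARY)
    return d.decompress(blob) + d.flush()

headers = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"
blob = compress_headers(headers)
assert decompress_headers(blob) == headers
```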
>> >
> Was HPACK ever benchmarked against deflate in such a configuration?
>> > Would you accept help in setting up such an experiment?
>> >
> Note that by a static dictionary I mean that we would generate a
>> > single deflate dictionary from a header corpus and always encode all
>> > data with it -- I don't mean the static Huffman mode in deflate.
>> >
>> >
>> > On Wed, Dec 31, 2014 at 7:52 PM, Dave Garrett <davemgarrett@gmail.com>
>> wrote:
> The goal of HPACK was never to produce ideal compression, just
>> > competent compression that isn't vulnerable to known issues. Some
>> > people do want to attempt to use/create a far more efficient codec
>> > here, but that's now accepted to be outside the initial scope. What
>> > could be very well received is an HTTP/2 extension to allow
>> > negotiation of alternate header compression methods. That would
>> > allow actual experimentation in an effort to find the best route(s).
>> >
>> >
>> > Dave
>> >
>>
>> --
>> Mark Nottingham   http://www.mnot.net/
>>
>
Received on Saturday, 3 January 2015 17:47:05 UTC
