
Re: HPACK problems (was http/2 & hpack protocol review)

From: Cory Benfield <cory@lukasa.co.uk>
Date: Tue, 6 May 2014 11:32:54 +0100
Message-ID: <CAH_hAJHXqC256K0-rPt6TyS8zNftjajEFoPWdo-Yn18OhHqY3w@mail.gmail.com>
To: Daniel Stenberg <daniel@haxx.se>
Cc: James M Snell <jasnell@gmail.com>, "K.Morgan@iaea.org" <K.Morgan@iaea.org>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>, "C.Brunhuber@iaea.org" <C.Brunhuber@iaea.org>
On 6 May 2014 11:00, Daniel Stenberg <daniel@haxx.se> wrote:
> Perhaps you can suggest an updated wording for the spec that makes it
> clearer?
>
> I (and others) have used nghttp2 quite a lot for interop lately so I want to
> be sure that what it does is what the spec says so that we don't interop
> against something not actually compliant!

I use nghttp2 as well, which is how I found this ambiguity. I
deliberately didn't suggest any new wording because I'm not actually
sure what the correct logic should be! From a personal perspective I'd
be tempted to mandate that header sets and reference sets not have
duplicate elements in them. However, to make that work while still
emitting the same header more than once (technically allowed but
totally insane) you'd need to mandate joining those duplicate header
values together with null bytes. I suspect that also hurts
compression efficiency slightly, though I've not sat down and
profiled it.
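For illustration, here's a minimal sketch of that null-byte folding idea — the function name and shape are mine, not anything from the draft:

```python
def fold_headers(headers):
    """Collapse duplicate header names into single entries.

    `headers` is a list of (name, value) pairs. Repeated names are
    joined with a null byte, so the resulting set never contains
    duplicate elements, while no emitted value is lost.
    """
    folded = {}
    for name, value in headers:
        if name in folded:
            # Same header emitted more than once: join with NUL.
            folded[name] = folded[name] + "\x00" + value
        else:
            folded[name] = value
    return list(folded.items())
```

So `fold_headers([("cookie", "a"), ("cookie", "b"), ("host", "x")])` yields a single `cookie` entry whose value is `"a\x00b"`.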

> Any more than other compression or encryption interop bugs (I'm just
> suggesting that the area of compression/checksumming/encryption makes
> annoying interop debugging)? If so, what is it that makes HPACK more
> difficult than others?

I've not written a complete compression implementation from scratch
for any other form of compression, so I'm not well placed to compare.
However, the reason I've been able to avoid doing so is that every
other compression algorithm used in HTTP is a mature standard, which
lets me use someone else's implementation (and assume that such an
implementation will actually be present by default). That eliminates a
whole class of interop bugs, because I can delegate to a mature
codebase that almost certainly does things right (or at least less
wrong).
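By way of contrast, delegating deflate to a mature codebase is this short in Python's standard-library `zlib` — a sketch of the delegation point, not of HPACK itself:

```python
import zlib

# Some repetitive HTTP-ish bytes to compress.
data = b"header: value\r\n" * 100

# All the compression logic lives in a battle-tested library;
# my code never touches the bit-level details.
compressed = zlib.compress(data)
assert zlib.decompress(compressed) == data
```

With HPACK there is no equivalent library to lean on yet, so every implementer gets to make (and debug) the bit-level mistakes themselves.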

It might be that HPACK is actually as simple as or simpler than other
compression algorithms to implement, but to me it feels incredibly
fiddly: there are quite a few moving pieces and inputs that come from
multiple places, so I need to keep track of what's doing what. I also
have to perform some calculations that look insane because the spec
mandates them (those 32 octets!).
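The calculation in question is the draft's entry-size rule: each header table entry counts as the octet length of its name, plus the octet length of its value, plus a fixed 32-octet overhead. A one-liner in Python (the function name is mine):

```python
def entry_size(name, value):
    # HPACK counts a header table entry as the octet lengths of its
    # name and value plus a fixed 32-octet per-entry overhead, the
    # draft's estimate for bookkeeping structures.
    return len(name) + len(value) + 32
```

So `(":method", "GET")` occupies 7 + 3 + 32 = 42 octets of table space, and that total is what gets compared against the table's size limit when deciding what to evict.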
Received on Tuesday, 6 May 2014 10:33:22 UTC
