Re: HPACK problems (was http/2 & hpack protocol review)

On 6 May 2014 11:52, Tatsuhiro Tsujikawa <tatsuhiro.t@gmail.com> wrote:
> And now the problem is what is the expected behavior.
> I think duplicates should be allowed because the algorithm in HPACK does not
> break if we allow them.
> I don't think the complexity of HPACK is much lowered if we get rid of
> duplicates.

Agreed: considering the HPACK spec in the abstract, the simplest and
broadest change is to explicitly allow duplicates in the header set.
We'd need to decide how that affects the reference set (as currently
specified, that would add the same reference twice to the reference
set, which is again probably acceptable).
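
For concreteness: if we do allow duplicates, the reference set
effectively becomes a multiset. A rough Python illustration (the
header tuple is made up, and collections.Counter stands in for the
multiset):

    from collections import Counter

    # With duplicates allowed the same reference can appear twice, so
    # the reference set has to count occurrences rather than just
    # track membership.
    reference_set = Counter()
    reference_set[('cookie', 'a=b')] += 1
    reference_set[('cookie', 'a=b')] += 1  # same reference, added twice

    # Removal now decrements a count instead of discarding an entry.
    reference_set[('cookie', 'a=b')] -= 1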

However, considering the real world for a moment: disallowing
duplicates allows the header set to be built with hugely efficient
operations using proper set data structures (amortized O(1) insertion
and membership for hash sets, which are present in the standard
library of every useful language except C). In fact, in pseudocode,
decoding logically becomes:

- Initialize empty header set
- Decode each header and add it to the header set and the reference
set (or remove it, as instructed)
- Emit the union of the header set and the reference set

These operations are fairly cheap and conceptually very clear.
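
To make that concrete, here's a minimal Python sketch (hyper being
Python, that's what I reach for); the instruction stream and its
('add'/'remove', header) shape are my invention, standing in for the
output of the bit-level HPACK decoder:

    def decode_header_block(instructions, reference_set):
        """Sketch of set-based decoding. `instructions` is assumed
        to be an iterable of ('add' | 'remove', (name, value))
        pairs emitted by the bit-level decoder."""
        # Initialize an empty header set.
        header_set = set()
        for op, header in instructions:
            if op == 'add':
                # Adding a duplicate to a real set is a no-op, which
                # is exactly what disallowing duplicates buys us.
                header_set.add(header)
                reference_set.add(header)
            else:
                # A removal instruction drops the entry from the
                # reference set.
                reference_set.discard(header)
        # Emit the union of the header set and the reference set.
        return header_set | reference_set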

Don't mistake that for me being hugely attached to the 'header set as
actual set' notion. I'm quite happy to go with nghttp2 on this,
especially as rewriting hyper is probably easier than rewriting
nghttp2. Just thought I'd present the case for the alternative.
