
Re: HPACK problems (was http/2 & hpack protocol review)

From: Greg Wilkins <gregw@intalio.com>
Date: Wed, 7 May 2014 16:05:31 +0200
Message-ID: <CAH_y2NH=VLG5VzvoOJ3K=d=p7wnxQ2reMD5iEQ8oqS9XM2qG6A@mail.gmail.com>
To: "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>

I'm coming back to the HTTP/2 WG very late in the process, so, firstly,
apologies for not giving feedback earlier.

The Jetty team is just starting our HTTP/2 work, and we too have serious
reservations about HPACK, both because of its complexity and because of its
poor layering.

In a multiplexed protocol like HTTP/2, individual streams are going to be
handed off to threads to be processed, potentially out of order.  However,
HPACK enforces an ordering on the decoding of header frames.  This means
that you cannot design your HTTP/2 implementation as a pure framing layer
with a protocol-handling layer on top, because header frames must be
processed in the order they were received; thus the header decoding has to
be performed by the thread that is doing the frame decoding.
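To make the ordering constraint concrete, here is a toy sketch (not the real HPACK algorithm; the class and instruction names are invented) of why header blocks cannot be decoded out of order: every decode can mutate a dynamic table that is shared by all streams on the connection.

```python
# Toy sketch of HPACK's ordering constraint.  Names are illustrative;
# this is not the actual HPACK wire format.

class TinyDecoder:
    """Minimal stand-in for an HPACK decoder."""

    def __init__(self):
        self.dynamic_table = []  # connection-wide state, newest entry first

    def decode(self, block):
        """Decode a list of instructions into (name, value) headers.

        ('literal_indexed', name, value) appends to the dynamic table;
        ('indexed', i) references an entry created by an EARLIER block.
        """
        headers = []
        for instr in block:
            if instr[0] == 'literal_indexed':
                _, name, value = instr
                self.dynamic_table.insert(0, (name, value))
                headers.append((name, value))
            elif instr[0] == 'indexed':
                headers.append(self.dynamic_table[instr[1]])
        return headers

decoder = TinyDecoder()
# Stream 1's block adds a table entry; stream 3's block references it.
first = decoder.decode([('literal_indexed', ':authority', 'example.com')])
second = decoder.decode([('indexed', 0)])
assert second == [(':authority', 'example.com')]
# Decoding the second block first would fail (or, worse, silently resolve
# index 0 to the wrong entry) -- hence decoding cannot be parallelised.
```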

Not only does this result in poor separation of concerns, it also
significantly slows down a server due to parallel slowdown.  The CPU core
that decodes a frame's contents is the best core to process it, as its
caches are warmed up with all the data!  But with HTTP/2 this is extremely
difficult to achieve, because we need to be able to handle streams out of
order.  Thus we have a thread dispatch between the decoding of the header
frames and the handling of those headers, meaning there is a high
probability that another CPU core will handle the frame and be slowed down
by cache misses.  This is a real problem (see
https://webtide.intalio.com/2012/12/avoiding-parallel-slowdown-in-jetty-9/).
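The forced structure looks roughly like the sketch below: because the decoder is stateful per connection, the framing thread must decode every header block itself, in order, and can only hand the already-decoded headers to a worker thread, which will likely run on another core with cold caches.  All class and function names here are illustrative, not Jetty's actual API.

```python
# Illustrative sketch of the dispatch pattern forced by stateful,
# order-dependent header decoding.  Not real Jetty or HPACK code.
from concurrent.futures import ThreadPoolExecutor

class InOrderDecoder:
    """Toy stand-in for an HPACK decoder (stateful, so order matters)."""
    def __init__(self):
        self.blocks_seen = 0
    def decode(self, frame):
        self.blocks_seen += 1          # shared mutable state per connection
        return list(frame)             # toy: frame is already a header list

def handle_request(headers):
    return len(headers)                # placeholder application logic

def framing_loop(frames, decoder, pool):
    futures = []
    for frame in frames:
        # Decoding is forced onto this (framing) thread, in order:
        headers = decoder.decode(frame)
        # Only the decoded result can be handed off to another core:
        futures.append(pool.submit(handle_request, headers))
    return [f.result() for f in futures]

with ThreadPoolExecutor(max_workers=4) as pool:
    counts = framing_loop(
        [[(':method', 'GET')], [(':method', 'POST'), (':path', '/')]],
        InOrderDecoder(), pool)
assert counts == [1, 2]
```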

Moreover, the requirement to decode headers in order detracts
significantly from the concept of stream priority, as low-priority frames
must be decoded (and the results put somewhere) before any subsequent
high-priority frames.

It also requires that intermediaries always decode and re-encode header
frames if they are aggregating streams.

There are also numerous strategy decisions that must be made by an HPACK
encoder, such as whether to replace or reuse the current set of headers,
whether to Huffman-code a value, etc.  These types of decisions are
difficult to make and will almost surely be made wrongly in many
situations.
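A toy sketch of the kind of per-header decision an encoder has to make (the function name and the size threshold are invented for illustration; real encoders use their own heuristics):

```python
# Invented illustration of HPACK encoder strategy choices; the
# heuristics below are NOT taken from any real implementation.

def choose_representation(name, value, table_index, huffman_len, raw_len):
    """Pick which representation to emit for one header.

    table_index: table index if (name, value) is already stored, else None.
    huffman_len / raw_len: encoded size of the value either way.
    """
    if table_index is not None:
        return ('indexed', table_index)        # cheapest: one integer
    # Should this entry be added to the dynamic table?  A wrong guess
    # either wastes table space or forces re-sending the literal forever.
    index_it = len(name) + len(value) < 64     # invented threshold
    # Huffman-code only when it actually saves bytes.
    huffman = huffman_len < raw_len
    kind = ('literal_with_indexing' if index_it
            else 'literal_without_indexing')
    return (kind, huffman)

assert choose_representation(':method', 'GET', 2, 3, 3) == ('indexed', 2)
assert choose_representation('x-trace-id', 'a' * 200, None, 150, 200) \
    == ('literal_without_indexing', True)
```

Every branch above is a place where a generic encoder, lacking knowledge of future headers, can choose badly.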

There is also an element of doubled complexity in a system that strives to
save every bit of a field, but then has an entire other layer that avoids
the need to send the field in the first place.

What I would like to see is a much simpler header-compression algorithm,
applied to HEADER/CONTINUATION frames, that is not order-dependent.

However, I would also support allowing header frames to be sent on stream
0, with headers so received considered part of all streams.  Such headers
would probably need to be versioned so that out-of-order processing can
occur, but I think that would be a lot less complex than the current
proposal.  If that also proves to be complex, then just simple header
compression would be OK.
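As a rough sketch of that stream-0 idea (every name here is hypothetical; this is not part of any HTTP/2 draft): connection-wide header sets would be published with a version number, and a stream's headers could then be resolved out of order, as long as the version it references has arrived.

```python
# Hypothetical sketch of versioned, connection-wide shared headers.
# Not part of any HTTP/2 draft; names are invented for illustration.

class SharedHeaders:
    def __init__(self):
        self.versions = {}   # version number -> dict of shared headers

    def publish(self, version, headers):
        # Each stream-0 update is immutable once published, so readers
        # never observe a half-applied state.
        self.versions[version] = dict(headers)

    def resolve(self, version, stream_headers):
        # KeyError here simply means "wait for that version to arrive";
        # no global decode ordering is required.
        shared = self.versions[version]
        return {**shared, **stream_headers}

shared = SharedHeaders()
shared.publish(1, {'user-agent': 'jetty-client', ':scheme': 'https'})
shared.publish(2, {'user-agent': 'jetty-client/2', ':scheme': 'https'})
# A stream referencing version 1 can still be processed after version 2
# has been published -- i.e. out of order:
h = shared.resolve(1, {':path': '/index.html'})
assert h['user-agent'] == 'jetty-client'
assert h[':path'] == '/index.html'
```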


regards

On 6 May 2014 11:14, Cory Benfield <cory@lukasa.co.uk> wrote:

> On 5 May 2014 16:26, James M Snell <jasnell@gmail.com> wrote:
> > FWIW, I am strongly -1 on the use of hpack in http/2. I believe that
> > there are (and have proposed/argued for) much less complicated
> > alternatives.
> >
> > - James
>
> I agree with James and Keith's anonymous reviewer's doubts about
> HPACK. Hyper's HPACK code has been a source of pain since I first
> wrote it. The spec varies wildly between mind-bogglingly specific
> (using 32 octets per-header-table entry as overhead because that's the
> size of two 64-bit pointers and two 64-bit integers) and
> under-specified.
>
> As an example of the under-specification, consider that the reference
> set and header sets are defined as unordered but do not say whether
> they may contain duplicate elements. My assumption was that they could
> not and so I could assume all implementations will join multiple
> values with null bytes, but that assumption has not been made
> elsewhere (nghttp2 certainly doesn't). Essentially, the word 'set' is
> being used here without clarity about what exactly is meant. Is a
> 'set' simply an unordered collection? Or is it subject to stronger
> constraints (more in line with the computer-science definition of a
> set)? I guarantee that I won't be the first person to read the word
> 'set' and jump to my chosen language's set data structure.
>
> These interop bugs with HPACK are ridiculously difficult to catch. I
> didn't find any during live testing or in use, I only found some when
> I used the excellent hpack-test-case[1] project to write 500-odd
> integration tests. Even this didn't catch everything: my interop
> problems with nghttp2 were only found when I submitted hyper's output
> to the project, fully four months after I introduced the bug.
>
> Finally, I'm utterly unconvinced that HPACK solves the problem it was
> intended to solve: compression-based attacks on TLS-encrypted
> sessions. I am not a cryptographer, but I'm quite prepared to believe
> that we'll see successful attacks on HPACK in the future.
>
> It could be that I'm simply less competent than everyone else on this
> list and no-one else has had the trouble with HPACK that I've had. But
> I'm also not a total moron, and the pain I had with HPACK suggests
> it's a potential pain-point for a lot of others as well.
>
> -- Cory
>
> [1] https://github.com/http2jp/hpack-test-case
>
>


-- 
Greg Wilkins <gregw@intalio.com>
http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
http://www.webtide.com  advice and support for jetty and cometd.
Received on Wednesday, 7 May 2014 14:06:03 UTC
