Re: HPACK problems (was http/2 & hpack protocol review)

On 7 May 2014 07:05, Greg Wilkins <gregw@intalio.com> wrote:
> In a multiplexed protocol like HTTP/2  individual streams are going to be
> handed off to threads to be processed, potentially out of order.

Note that it is possible, though not easy, to construct a system that
can perform the bulk (or maybe even all) of HPACK processing on
different threads.  I've been involved in a couple of offline
discussions about this.  To do this, you can flush the reference set
on each header block and then rely on having certain guarantees
regarding the header table (i.e., strict controls on modifications).
A worker thread that updates the header table can do so within certain
constraints, though this may result in a need to perform fixups for
references to the static table, depending on the final order in which
header fields are ultimately serialized.
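To make the idea concrete, here is a minimal sketch of the scheme
described above: every header block is encoded with an empty reference
set, so blocks are self-contained, and all mutations of the shared
header table are serialized behind a lock.  The class and function
names, and the simplified tuple-based "encoding", are hypothetical
illustration only, not the real HPACK data structures or wire format.

```python
import threading

class SharedHeaderTable:
    """Simplified stand-in for the HPACK dynamic header table.

    All mutations are serialized with a lock, mirroring the "strict
    controls on modifications" mentioned above.  (Hypothetical sketch,
    not the actual HPACK table or its eviction rules.)
    """
    def __init__(self):
        self._lock = threading.Lock()
        # Most-recent-first, loosely like HPACK's dynamic table indexing.
        self._entries = []

    def insert(self, name, value):
        with self._lock:
            self._entries.insert(0, (name, value))

    def lookup(self, name, value):
        with self._lock:
            try:
                return self._entries.index((name, value))
            except ValueError:
                return None

def encode_block(table, headers):
    """Encode one header block with the reference set flushed.

    Because no reference-set state carries over between blocks, a
    worker thread can encode its block independently: each field is
    either an indexed reference (already in the shared table) or a
    literal that also inserts into the table.
    """
    ops = []
    for name, value in headers:
        idx = table.lookup(name, value)
        if idx is not None:
            ops.append(("indexed", idx))
        else:
            table.insert(name, value)
            ops.append(("literal", name, value))
    return ops
```

Note the remaining serialization point: inserts still race for table
indices, which is why fixups may be needed once the final emission
order of the header blocks is known.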

The shared context here is obviously more problematic for decoding,
since you have no guarantees from your peer about what they are
willing to do.

Ultimately, this trades compression efficiency for other things.  In
this case, some of the compression efficiency is gained by relying on
having strict serialization.  I don't believe it to be possible to
have a simpler compression algorithm that achieves this level of
compression efficiency without an ordering requirement.  That would
also need to be sensitive to the security concerns we've been
skirting.  I am happy to be proven wrong in this regard, of course.

It might be that good performance will require at least one HTTP/2
connection per core (or thread) on your machine, such that you are
unable to completely saturate a single connection.  I think that the
operating assumption is that machines with the sorts of performance
requirements you are talking about here will be operating on more
connections than that.

--Martin

Received on Wednesday, 7 May 2014 18:06:06 UTC