HTTP/2 feedback/questions

Hello,

I've been working on the HTTP/2 implementation in NGINX for the past few
months.  As a server-side developer I'd like to share some feedback about
the protocol, and also take the chance to ask some questions regarding the
specification.

Here they are:

 1. The whole prioritization mechanism looks over-engineered.  The essential
    thing it was designed to do is to determine which requests should be
    served first.

    On the server side we have a few requests and one connection pipe, and
    all we need is the order in which to send response data.

    Instead we must maintain a dependency tree and support a bunch of
    complicated operations on it, and the tree has no explicit limit on its
    size.

    Seems like the KISS principle was forgotten.

 2. A quote from the RFC:

   "an endpoint SHOULD retain stream prioritization state for a period after
    streams become closed.  The longer state is retained, the lower the chance
    that streams are assigned incorrect or default priority values."

    How long should this period be?

    First a race condition was introduced, and then only this weak remedy
    was offered.  It's another signal that the prioritization mechanism
    isn't in good shape.  By the way, there is no such problem in SPDY.

 3. More quotes:

   "a relative weight, a number that is used to determine the relative
    proportion of available resources that are assigned to streams dependent
    on the same stream."

   "Streams with the same parent SHOULD be allocated resources proportionally
    based on their weight."

   "Thus, if stream B depends on stream A with weight 4, stream C depends on
    stream A with weight 12, and no progress can be made on stream A, stream B
    ideally receives one-third of the resources allocated to stream C."

    ...and there are other mentions of "resource allocation" and "available
    resources".

    What does it mean to allocate resources proportionally?  What kinds of
    resources?

    The solution I've eventually come to is to divide the number of bytes
    already sent in each response by the stream's weight, and to use the
    resulting quotients to order subsequent frames among streams with the
    same dependency.
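    For illustration, the ordering above can be sketched as follows (a toy
    model I'm using to describe the idea, not actual NGINX code): among
    streams sharing the same parent, always send the next frame on the
    stream with the smallest bytes_sent / weight quotient.

    ```python
    def next_stream(streams):
        """Pick the stream whose bytes-sent-per-weight quotient is smallest."""
        return min(streams, key=lambda s: s["bytes_sent"] / s["weight"])

    # The RFC's own example: B has weight 4, C has weight 12.
    streams = [
        {"id": "B", "weight": 4, "bytes_sent": 0},
        {"id": "C", "weight": 12, "bytes_sent": 0},
    ]

    sent = {"B": 0, "C": 0}
    FRAME = 1024  # bytes per DATA frame in this toy model
    for _ in range(16):
        s = next_stream(streams)
        s["bytes_sent"] += FRAME
        sent[s["id"]] += FRAME

    # Over 16 frames, B ends up with 4 frames and C with 12, i.e. B
    # receives one-third of the bytes allocated to C, as the RFC suggests.
    ```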
    
 4. The compression ratio of Huffman encoding in HPACK varies between 1.6
    (in the best case) and ~0.267 (in the worst case), and the spec has no
    requirement to avoid Huffman encoding when the ratio is 1 or lower.

    Also, the number of uncompressed bytes is not stored, so to determine
    the real length of a field we must decompress it first.  We cannot even
    make a good optimistic guess, because the ratio varies so widely.
    
    This creates a bunch of problems.  The most annoying is the case when we
    are going to reject a request due to some limitation and should spend as
    few resources as possible, yet we still have to decompress the HPACK
    header block because of the stateful nature of the compression.
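    For reference, those bounds follow directly from the code lengths in the
    HPACK Huffman table (RFC 7541, Appendix B), where the shortest code is
    5 bits and the longest is 30 bits:

    ```python
    # Per-octet compression-ratio bounds for HPACK Huffman coding,
    # derived from the code lengths in RFC 7541, Appendix B.
    OCTET_BITS = 8
    SHORTEST_CODE = 5   # common characters such as '0', 'a', 'e'
    LONGEST_CODE = 30   # rare octets, mostly control characters

    best_ratio = OCTET_BITS / SHORTEST_CODE   # 1.6: output ~37% smaller
    worst_ratio = OCTET_BITS / LONGEST_CODE   # ~0.267: output 3.75x larger
    ```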

 5. What's the purpose of "Dynamic Table Size Update"?  Why does this thing
    exist?  In what cases should it be used?

 6. Why can a header field that doesn't fit into the dynamic table be
    transferred with incremental indexing?  What's the purpose of evicting
    all entries from the dynamic table in that case?
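    For concreteness, the eviction rule in question (RFC 7541, Section 4.4)
    can be sketched as follows (a toy model, not real decoder code); entry
    size is name length + value length + 32 octets, per the spec:

    ```python
    ENTRY_OVERHEAD = 32  # per-entry overhead defined by RFC 7541, Section 4.1

    class DynamicTable:
        def __init__(self, max_size):
            self.max_size = max_size
            self.entries = []  # newest first
            self.size = 0

        def insert(self, name, value):
            entry_size = len(name) + len(value) + ENTRY_OVERHEAD
            # Evict the oldest entries until the new entry fits.
            while self.entries and self.size + entry_size > self.max_size:
                old_name, old_value = self.entries.pop()
                self.size -= len(old_name) + len(old_value) + ENTRY_OVERHEAD
            if entry_size > self.max_size:
                # An entry larger than the whole table can never fit, so the
                # table simply ends up empty (RFC 7541, Section 4.4).
                return
            self.entries.insert(0, (name, value))
            self.size += entry_size

    table = DynamicTable(max_size=100)
    table.insert("x-a", "1")           # fits
    table.insert("x-huge", "v" * 200)  # bigger than the table: empties it
    ```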

Thanks.

  wbr, Valentin V. Bartenev

Received on Friday, 24 July 2015 17:36:09 UTC