Fragmentation for headers: why jumbo != continuation.

There are two separate reasons to fragment headers:

1) Dealing with headers of size > X when the max frame-size is <= X.
2) Reducing buffer consumption and latency.

Most of the discussion thus far has focused on #1.
I'm going to ignore it, as those discussions are occurring elsewhere, and
in quite some depth :)


I wanted to be sure we were also thinking about #2.

Without the ability to fragment headers on the wire, one must know the size
of the entire set of headers before any of it may be transmitted.

This implies that one must encode the entire set of headers before sending
any of them whenever the headers will be transformed. Encoding the headers
in a different HPACK context counts as a transformation, even if none of
the headers themselves are modified.

This means that a protocol without the ability to fragment would, by
design, impose increased buffering and increased latency on any proxy.
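To make the buffering cost concrete, here is a minimal Python sketch. It is not real HPACK: the `encode` function and the 3-byte length prefix are placeholder assumptions. It contrasts an unfragmented header block, whose length prefix cannot be written until the whole block has been encoded, with a CONTINUATION-style fragmented one, which can flush fixed-size frames as headers are produced:

```python
def encode(header):
    # Placeholder for a real header encoder (e.g. HPACK); just serializes
    # the pair as bytes so the size arithmetic below is visible.
    name, value = header
    return f"{name}: {value}\r\n".encode()

def send_buffered(headers):
    """Without fragmentation: encode everything, then frame it.

    The length prefix can only be written once the full block exists,
    so the sender must hold the entire encoded block in memory.
    """
    block = b"".join(encode(h) for h in headers)
    yield len(block).to_bytes(3, "big") + block

def send_fragmented(headers, max_frame_size=64):
    """With fragmentation: flush a frame as soon as one fills up."""
    buf = b""
    for h in headers:
        buf += encode(h)
        while len(buf) >= max_frame_size:
            # CONTINUATION-like: this frame can go on the wire now,
            # before the remaining headers have even been encoded.
            chunk, buf = buf[:max_frame_size], buf[max_frame_size:]
            yield len(chunk).to_bytes(3, "big") + chunk
    if buf:
        yield len(buf).to_bytes(3, "big") + buf
```

The buffered sender's peak memory grows with the total header size, while the fragmented sender's is bounded by `max_frame_size`, and its first frame can leave before the last header arrives.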

This is not currently true for HTTP/1: headers can be sent and received in
a streaming fashion, and implementations may, at their option, buffer in
order to simplify their code.
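For comparison, an HTTP/1 sender can put each header line on the wire the moment it is known, with no up-front total size. A minimal sketch, using an in-memory stream standing in for a socket:

```python
import io

def stream_http1_headers(wire, headers):
    """Write HTTP/1 headers one line at a time; no total size is needed up front."""
    wire.write(b"HTTP/1.1 200 OK\r\n")
    for name, value in headers:   # each line can be flushed as it is produced
        wire.write(f"{name}: {value}\r\n".encode())
    wire.write(b"\r\n")           # a blank line terminates the header section
```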

-=R

Received on Thursday, 10 July 2014 20:27:31 UTC