Re: Fragmentation for headers: why jumbo != continuation.

Isn't fragmentation solved by eliminating the reference set? IIUC, removing the reference set allows a header block to be fragmented (into 1*HEADERS frames) and multiplexed with other streams, as long as the HEADERS frames themselves are processed in order.
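To make the in-order requirement concrete, here's a minimal sketch (illustrative only, not real HPACK wire format): because each fragment may insert into and index a shared dynamic table, fragments of one header block must be decoded in the order sent, while blocks for different streams can still be interleaved at frame boundaries once no reference set couples state to "the previous header block".

```python
# Illustrative stateful decoder: each fragment mutates the shared
# dynamic table, so decode order matters. All names here are
# hypothetical, not taken from any real HPACK implementation.
class StatefulDecoder:
    def __init__(self):
        self.dynamic_table = []  # shared compression state

    def decode_fragment(self, fragment):
        """Decode one HEADERS fragment; 'indexed' ops may reference
        entries inserted by earlier fragments."""
        headers = []
        for op, value in fragment:
            if op == "insert":        # add a new (name, value) entry
                self.dynamic_table.insert(0, value)
                headers.append(value)
            elif op == "indexed":     # reference an existing entry
                headers.append(self.dynamic_table[value])
        return headers

decoder = StatefulDecoder()
# Fragment 2 indexes the entry inserted by fragment 1, so swapping
# their order would break decoding.
frag1 = [("insert", ("content-type", "text/html"))]
frag2 = [("indexed", 0), ("insert", ("x-custom", "a"))]
part1 = decoder.decode_fragment(frag1)
part2 = decoder.decode_fragment(frag2)
assert part2[0] == ("content-type", "text/html")
```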

-Keith


> On Jul 10, 2014, at 22:30, "grmocg@gmail.com" <grmocg@gmail.com> wrote:
>
> There are two separate reasons to fragment headers
>
> 1) Dealing with headers of size > X when the max frame-size is <= X.
> 2) Reducing buffer consumption and latency.
>
> Most of the discussion thus far has focused on #1.
> I'm going to ignore it, as those discussions are occurring elsewhere, and in quite some depth :)
>
>
> I wanted to be sure we were also thinking about #2.
>
> Without the ability to fragment headers on the wire, one must know the size of the entire header block before any of it may be transmitted.
>
> This implies that one must encode the entire set of headers before sending if one ever transforms them. Encoding the headers in a different HPACK context counts as a transformation, even if none of the headers themselves were modified.
>
> This means that the protocol, if it lacked the ability to fragment, would by design impose increased buffering and increased latency on any proxy.
>
> This is not currently true for HTTP/1-- the headers can be sent/received in a streaming fashion, and implementations may, at their option, choose to buffer in order to simplify code.
>
> -=R
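The buffering point above can be sketched as follows (hypothetical length-prefixed framing and a stand-in `encode` function, not real HPACK or HTTP/2 frames): without fragmentation, a re-encoding proxy must hold the whole header set before the first byte leaves, since the single frame needs its encoded length up front; with fragmentation, headers can be encoded and flushed as they arrive.

```python
# Contrast of a buffer-everything proxy vs. a streaming one.
# The 3-byte length prefix and the encode() below are invented
# for illustration only.

def forward_without_fragmentation(headers, encode):
    # Must buffer: the full block is encoded before anything is sent.
    block = b"".join(encode(h) for h in headers)
    frame = len(block).to_bytes(3, "big") + block  # one large frame
    return [frame]

def forward_with_fragmentation(headers, encode, max_frame=16):
    # May stream: flush a frame whenever max_frame bytes accumulate.
    frames, buf = [], b""
    for h in headers:
        buf += encode(h)
        while len(buf) >= max_frame:
            chunk, buf = buf[:max_frame], buf[max_frame:]
            frames.append(len(chunk).to_bytes(3, "big") + chunk)
    if buf:
        frames.append(len(buf).to_bytes(3, "big") + buf)
    return frames

encode = lambda h: (h[0] + ":" + h[1] + "\n").encode()
hdrs = [(":status", "200"), ("content-type", "text/plain")]
one = forward_without_fragmentation(hdrs, encode)
many = forward_with_fragmentation(hdrs, encode)
assert len(one) == 1 and len(many) > 1
```

Both paths carry the same bytes; the difference is only when the first frame can be emitted, which is exactly the latency/buffering cost described above.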

Received on Thursday, 10 July 2014 20:51:34 UTC