Re: Fragmentation for headers: why jumbo != continuation.

On 12 July 2014 15:24, Roberto Peon <grmocg@gmail.com> wrote:

> For a proxy, it is a sender and a receiver.
> Allowing fragmentation allows the sender-half of the proxy to reduce its
> memory commitment.
> Again, nothing changed on the receiver side, which implies that the
> proxy's memory commitment is reduced.
>

I don't see this, sorry.

We have already accepted that a proxy must buffer the entire header block
before it starts to send it, otherwise a whole connection can easily be
blocked (this is the "just don't do it" point Martin has made many times).
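
Just to be concrete about what that buffering looks like on the receive
side, here is a rough sketch in plain Java (not any particular library's
API): fragments are accumulated until END_HEADERS is seen, and nothing is
forwarded before then.

import java.io.ByteArrayOutputStream;

// Sketch only: a proxy-side collector that buffers HEADERS/CONTINUATION
// payloads until END_HEADERS, since the block cannot be forwarded piecemeal
// without risking blocking the whole connection on an incomplete block.
public class HeaderBlockCollector {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    // fragment   = payload of a HEADERS or CONTINUATION frame
    // endHeaders = true if the frame carried the END_HEADERS flag
    // returns the complete header block once END_HEADERS arrives, else null
    public byte[] onHeaderFragment(byte[] fragment, boolean endHeaders) {
        buffer.write(fragment, 0, fragment.length);
        if (!endHeaders)
            return null;              // keep buffering; forward nothing yet
        byte[] block = buffer.toByteArray();
        buffer.reset();
        return block;                 // caller can now decode/re-encode and forward
    }
}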

So once the proxy has the entire header block in its memory, it can
definitely send it with low buffering.  It may need to do two passes over
the headers, one to calculate the encoded size and another to do the actual
sending, but it is certainly doable.  So it's just an implementation detail
whether the proxy wishes to trade memory for latency.
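
Roughly what I mean by two passes, as a sketch (HeaderFieldEncoder below is
made up for illustration; real HPACK is stateful, so the measuring pass
would need care not to update the dynamic table twice):

import java.io.IOException;
import java.io.OutputStream;
import java.util.Map;

public class TwoPassHeaderSender {

    // Hypothetical per-field encoder, assumed stateless for this sketch.
    public interface HeaderFieldEncoder {
        void encode(String name, String value, OutputStream out) throws IOException;
    }

    // Counts bytes without storing them (pass 1).
    static final class CountingStream extends OutputStream {
        long count;
        @Override public void write(int b) { count++; }
        @Override public void write(byte[] b, int off, int len) { count += len; }
    }

    public void send(int streamId, Map<String, String> headers,
                     HeaderFieldEncoder encoder, OutputStream connection) throws IOException {
        // Pass 1: encode into a counter to learn the payload length.
        CountingStream counter = new CountingStream();
        for (Map.Entry<String, String> h : headers.entrySet())
            encoder.encode(h.getKey(), h.getValue(), counter);

        writeFrameHeader(connection, streamId, counter.count);

        // Pass 2: encode again, writing each field straight to the wire,
        // so the full encoded block never needs to be held in memory.
        for (Map.Entry<String, String> h : headers.entrySet())
            encoder.encode(h.getKey(), h.getValue(), connection);
    }

    private void writeFrameHeader(OutputStream out, int streamId, long len) throws IOException {
        // 9-octet frame header: 24-bit length, type, flags, 31-bit stream id
        // (padding and priority are omitted in this sketch).
        out.write((int) (len >>> 16) & 0xFF);
        out.write((int) (len >>> 8) & 0xFF);
        out.write((int) len & 0xFF);
        out.write(0x01);                     // type: HEADERS
        out.write(0x04);                     // flags: END_HEADERS
        out.write((streamId >>> 24) & 0x7F);
        out.write((streamId >>> 16) & 0xFF);
        out.write((streamId >>> 8) & 0xFF);
        out.write(streamId & 0xFF);
    }
}

The trade is one extra encoding pass per header block in exchange for never
holding the whole encoded block, which is exactly the memory-vs-latency
choice I mean.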

If you really want streamable headers, then the only way to achieve that is
to allow them to be fragmented just as data frames are, so that a proxy can
receive/decode/update/encode/send on the fly.  HPACK would obviously need
to be updated, and fragmented headers would need to be interleavable with
other frames.  I'm +1000000 for going this route!
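
To sketch what that streaming model could look like (the interfaces below
are purely hypothetical, since HPACK as currently specified does not allow
a block to be decoded fragment-by-fragment while other streams' header
fragments are interleaved):

import java.io.IOException;
import java.io.OutputStream;
import java.util.List;

public class StreamingHeaderProxy {

    public interface HeaderField { String name(); String value(); }

    // Hypothetical decoder that yields complete fields as fragments arrive.
    public interface IncrementalDecoder {
        List<HeaderField> decode(byte[] fragment) throws IOException;
    }

    // Hypothetical encoder that writes one field at a time to the upstream connection.
    public interface IncrementalEncoder {
        void encode(HeaderField field, OutputStream out) throws IOException;
    }

    private final IncrementalDecoder decoder;
    private final IncrementalEncoder encoder;
    private final OutputStream upstream;

    public StreamingHeaderProxy(IncrementalDecoder d, IncrementalEncoder e, OutputStream up) {
        this.decoder = d;
        this.encoder = e;
        this.upstream = up;
    }

    // Receive/decode/update/encode/send, one fragment at a time.
    public void onHeaderFragment(byte[] fragment) throws IOException {
        for (HeaderField field : decoder.decode(fragment)) {
            if (isHopByHop(field))
                continue;                     // on-the-fly update: drop the field
            encoder.encode(field, upstream);  // forward immediately, no full-block buffer
        }
    }

    private boolean isHopByHop(HeaderField field) {
        return "connection".equalsIgnoreCase(field.name());
    }
}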

cheers

-- 
Greg Wilkins <gregw@intalio.com>
http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
http://www.webtide.com  advice and support for jetty and cometd.

Received on Saturday, 12 July 2014 06:34:19 UTC