
Header compression: unbounded memory use, again

From: Gábor Molnár <gabor.molnar@sch.bme.hu>
Date: Sat, 07 Sep 2013 10:29:45 +0200
To: HTTP Working Group <ietf-http-wg@w3.org>
Message-id: <CA+KJw_7XKu8SqQYrPpQ8sf4kOTa_XuZs9MF5tNjh2U0fh_Zr4Q@mail.gmail.com>
The current header compression deals even with very large header sets
in a memory-efficient way, since it allows the compressor/decompressor
to operate in streaming mode. That's great, and I plan to expose a
streaming header API in node-http2 (besides the non-streaming API).
However, I think there's at least one use case where it's not good
enough yet, and I would like to hear the WG's opinion about it.
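To make the memory argument concrete, here is a minimal sketch of the
difference between a streaming and a non-streaming header API. The names
and shapes are invented for illustration; this is not node-http2's actual
API, and real decoding would decompress fragments incrementally:

```javascript
// Hypothetical sketch (not node-http2's real API): the same decode logic
// exposed two ways. The streaming form yields one header pair at a time,
// so the caller never needs to hold the whole header set in memory; the
// buffered form collects everything into an array first.
function* decodeStreaming(fragments) {
  for (const frag of fragments) {
    // A real decoder would decompress here; for this sketch each
    // fragment is already a decoded [name, value] pair.
    yield frag;
  }
}

function decodeBuffered(fragments) {
  // Holds the entire header set at once -- memory grows with set size.
  return Array.from(decodeStreaming(fragments));
}

const frags = [['method', 'GET'], ['path', '/'], ['x-big', 'v']];
let count = 0;
for (const [name, value] of decodeStreaming(frags)) count++; // O(1) live pairs
console.log(count, decodeBuffered(frags).length); // 3 3
```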

Let's suppose that there's a reverse proxy with one backend server (so
that it doesn't have to do routing, which is a separate problem). The
proxy doesn't want to limit the size of header sets but still wants to
limit its memory usage.

The solution seems simple: stream the headers from the client
connection to the backend connection so that the proxy doesn't have to
store any headers for long. But what should the proxy do when the
client sends a header set that spans multiple frames?

1. If it waits for the end of the series, then its memory usage is
unbounded, so it cannot do that. Because of this, it forwards the
incoming headers on the backend connection immediately.

2. This works nicely if the client sends the continuation frames in a
timely manner. But suppose the client is slow to send those frames
(maybe intentionally). On the backend connection, the proxy cannot
send any frames except continuations belonging to this series until
the whole header set is finished. Because of this, it has to buffer
any incoming frames (from other clients) that are to be forwarded on
the backend connection. This leads to unbounded memory use again.
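The head-of-line blocking in case 2 can be sketched as a toy simulation.
Everything here is invented for illustration (the frame objects and the
BackendConnection class are not a real HTTP/2 or node-http2 API); it only
shows how the queue of frames from other clients grows without bound
while one slow client withholds its final continuation:

```javascript
// Toy model of a backend connection: once a header block is open on some
// stream, only continuations for that stream may be sent until END_HEADERS.
// Frames for any other stream must be buffered in the meantime.
class BackendConnection {
  constructor() {
    this.sent = [];           // frames actually written to the backend
    this.queue = [];          // frames from other clients, waiting
    this.openBlock = null;    // stream id of the in-progress header block
  }
  send(frame) {
    if (this.openBlock !== null && frame.stream !== this.openBlock) {
      this.queue.push(frame); // blocked: must buffer (unbounded growth)
      return;
    }
    this.sent.push(frame);
    if (frame.type === 'HEADERS' && !frame.endHeaders) {
      this.openBlock = frame.stream;   // header block now open
    }
    if (frame.type === 'CONTINUATION' && frame.endHeaders) {
      this.openBlock = null;           // block finished: drain the queue
      const pending = this.queue;
      this.queue = [];
      pending.forEach(f => this.send(f));
    }
  }
}

const conn = new BackendConnection();
// A slow client starts a header block on stream 1 but never finishes it...
conn.send({ stream: 1, type: 'HEADERS', endHeaders: false });
// ...while 1000 frames arrive from other clients for the same backend.
for (let i = 0; i < 1000; i++) conn.send({ stream: 3, type: 'DATA' });
console.log(conn.queue.length); // 1000 -- grows with traffic, i.e. unbounded
```

Sending the final CONTINUATION for stream 1 would let the queue drain, but
a hostile or merely slow client controls when (or whether) that happens.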

I'm not implementing a proxy, so this is just a thought experiment, and
handling arbitrarily large header sets in a proxy is maybe too
theoretical anyway. But as far as I can see, these edge cases are also
important to handle in the spec. This seems to be a problem with the
connection-level compression design, and it's probably not worth making
fundamental changes to support this edge case, but still, I would be
happy to hear your opinion on this issue.

Received on Saturday, 7 September 2013 08:30:34 UTC
