
Re: Header compression: unbounded memory use, again

From: Roberto Peon <grmocg@gmail.com>
Date: Sat, 7 Sep 2013 02:11:32 -0700
Message-ID: <CAP+FsNdTJZkgWdKdR-gXqLE2aAL6k6MC0xDhrUKOxv-_Q0yoDQ@mail.gmail.com>
To: Gábor Molnár <gabor.molnar@sch.bme.hu>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
One of the memory exhaustion attacks you're referring to is present even in
HTTP/1.0 and HTTP/1.1: a slow remote endpoint that intentionally, and
potentially maliciously, sends request or response headers (or even a body)
slowly. We just have to build mechanisms into our servers, proxies, and
clients to deal with these attack vectors, just as we must deal with SYN
floods and other attacks. Such is life :/

Timeouts coupled with maximum state size per
client/connection/request/response are a pretty common way of dealing with
this.
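As a minimal sketch of that mitigation, here is how a server might cap the state buffered per header block and enforce a deadline for receiving the complete series. This is illustrative only; the class and parameter names (HeaderAccumulator, maxBytes, deadlineMs) are invented for this example and are not part of node-http2 or any real API:

```javascript
// Accumulates HEADERS/CONTINUATION fragments for one request, enforcing
// both a maximum total size and an absolute deadline for the series.
class HeaderAccumulator {
  constructor(maxBytes, deadlineMs, now = Date.now) {
    this.maxBytes = maxBytes;            // cap on buffered header state
    this.deadline = now() + deadlineMs;  // cutoff for the whole series
    this.now = now;                      // injectable clock, for testing
    this.chunks = [];
    this.size = 0;
  }

  // Feed one header block fragment; throws if the peer is too slow
  // or the header block grows past the cap.
  push(fragment) {
    if (this.now() > this.deadline) {
      throw new Error('header block timed out');
    }
    this.size += fragment.length;
    if (this.size > this.maxBytes) {
      throw new Error('header block too large');
    }
    this.chunks.push(fragment);
  }

  // Called when END_HEADERS arrives: returns the reassembled block.
  finish() {
    return Buffer.concat(this.chunks);
  }
}
```

On either error the connection (or just the offending stream) can be torn down, so a slow or oversized header series costs the server at most maxBytes of state for at most deadlineMs.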

-=R


On Sat, Sep 7, 2013 at 1:29 AM, Gábor Molnár <gabor.molnar@sch.bme.hu> wrote:

> The current header compression deals with even large header sets in a
> memory-efficient way, since it allows the compressor/decompressor to
> operate in streaming mode. That's great, and I plan to expose a
> streaming header API in node-http2 (besides the non-streaming API).
> However, I think there's at least one use case where it's not good
> enough yet, and I would like to hear the WG's opinion about it.
>
> Let's suppose that there's a reverse proxy with one backend server (so
> that it doesn't have to do routing, which is a separate problem). The
> proxy doesn't want to limit the size of header sets but still wants to
> limit its memory usage.
>
> The solution seems simple: stream the headers from the client
> connection to the backend connection so that the proxy doesn't have to
> store any headers for a long time. So what should the proxy do when the
> client sends a header set that spans multiple frames?
>
> 1. If it waits for the end of the series, then its memory usage is
> unbounded, so it cannot do that. Because of this, it forwards the
> incoming headers on the backend connection immediately.
>
> 2. This works nicely if the client sends the continuation frames in a
> timely manner. But let's suppose that it is slow to send those frames
> (maybe intentionally). On the backend connection, the proxy cannot
> send any frames except continuations belonging to this series until
> the whole header set is complete. Because of this, it will have to
> buffer any incoming frames (from other clients) that are to be
> forwarded on the backend connection. This leads to unbounded memory
> use again.
>
> I'm not implementing a proxy, so this is just a thought experiment, and
> handling arbitrarily large header sets in a proxy may be too
> theoretical anyway. But as far as I can see, these edge cases are also
> important to handle in the spec. This seems to be a problem with the
> connection-level compression design, and it's probably not worth making
> fundamental changes to support this edge case, but still, I would be
> happy to hear your opinion on this issue.
>
>  Gábor
>
>
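The head-of-line blocking described in case 2 of the quoted message can be modeled in a few lines. This is a toy sketch, not real HTTP/2 code: the BackendConnection class and the frame object shape are invented for illustration. It shows how, once a HEADERS frame without END_HEADERS has been forwarded, the backend connection may carry nothing but that stream's continuations, so frames from every other client queue up without bound:

```javascript
// Toy model of a proxy's backend connection that obeys the rule that no
// other frames may be interleaved inside an unfinished header block.
class BackendConnection {
  constructor() {
    this.sent = [];               // frames actually written to the backend
    this.blockedQueue = [];       // frames from other streams, waiting
    this.openHeaderStream = null; // stream id of an unfinished header block
  }

  send(frame) {
    if (this.openHeaderStream !== null &&
        frame.stream !== this.openHeaderStream) {
      // An unfinished header block is in flight: everything from other
      // streams must be buffered. This queue is the unbounded memory.
      this.blockedQueue.push(frame);
      return;
    }
    this.sent.push(frame);
    if (frame.type === 'HEADERS' && !frame.endHeaders) {
      this.openHeaderStream = frame.stream;
    } else if (frame.type === 'CONTINUATION' && frame.endHeaders) {
      this.openHeaderStream = null;
      // Header block finished: flush whatever was stuck behind it.
      const queued = this.blockedQueue;
      this.blockedQueue = [];
      queued.forEach(f => this.send(f));
    }
  }
}
```

If the client never sends the final continuation, blockedQueue grows with every frame destined for the backend, which is exactly the unbounded memory use the message describes.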
Received on Saturday, 7 September 2013 09:11:59 UTC
