
Re: HEADERS and flow control

From: David Krauss <potswa@gmail.com>
Date: Thu, 22 May 2014 19:03:19 +0800
Cc: Mark Nottingham <mnot@mnot.net>, Roberto Peon <grmocg@gmail.com>, Greg Wilkins <gregw@intalio.com>, HTTP Working Group <ietf-http-wg@w3.org>, Michael Sweet <msweet@apple.com>
Message-Id: <DD7137D4-1677-4C34-A242-9AF90A3F2034@gmail.com>
To: Martin Thomson <martin.thomson@gmail.com>

On 2014-05-22, at 6:49 PM, Martin Thomson <martin.thomson@gmail.com> wrote:

> On May 22, 2014 12:04 AM, "David Krauss" <potswa@gmail.com> wrote:
> > We can nip this all in the bud by limiting header block size. For header-heavy protocols, dividing the metadata stream into multiple blocks should be an essential feature in any case. Should an upper limit be set on header block size? How could it be specified?
> We have asked before about setting a hard limit. That doesn't work. There are too many existing uses and users that wouldn't be able to use HTTP/2 as a result.

Even a megabyte? Who sends that many HTTP/1.1 headers? Just divide the headers into separate blocks; how hard can it be?

A 1.1<->2 proxy can add divisions arbitrarily, without adding ambiguity. END_STREAM serves as a flag not to terminate the 1.1 headers.
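For illustration, the division described above can be sketched as splitting one header block across a HEADERS frame plus CONTINUATION frames, with END_HEADERS set only on the last fragment. This is a hypothetical sketch, not code from the thread; the frame-type and flag values follow the HTTP/2 framing layer, and the function name and 16384-octet default are illustrative assumptions.

```python
# Hypothetical sketch of a gateway dividing a large header block into
# HEADERS + CONTINUATION frames. Frame types and the END_HEADERS flag
# value match HTTP/2 framing; everything else is illustrative.
FRAME_HEADERS = 0x1
FRAME_CONTINUATION = 0x9
FLAG_END_HEADERS = 0x4

def split_header_block(block: bytes, max_frame: int = 16384):
    """Yield (frame_type, flags, payload) tuples for one header block.

    The first fragment goes in a HEADERS frame, the rest in CONTINUATION
    frames; only the final fragment carries END_HEADERS.
    """
    chunks = [block[i:i + max_frame]
              for i in range(0, len(block), max_frame)] or [b""]
    for i, chunk in enumerate(chunks):
        ftype = FRAME_HEADERS if i == 0 else FRAME_CONTINUATION
        flags = FLAG_END_HEADERS if i == len(chunks) - 1 else 0
        yield ftype, flags, chunk
```

Because the receiver reassembles fragments until it sees END_HEADERS, the proxy can place the divisions anywhere without changing what the header block means.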

> A declared maximum (i.e., settings) might work.

Not proxyable.

> You can't however reject a header block that you don't want. Not without also dumping the connection. Common state being what it is.
> The best approach is to stream headers. The HPACK design permits it, and that allows for a bounded state commitment. The only real cost is the head of line blocking.

Unbounded head-of-line blocking. If a reverse proxy decides to start streaming headers to an origin, but the client stops sending them, the server becomes a zombie. The solutions are either not to use stream multiplexing upstream of a reverse proxy, or to buffer headers indefinitely at the proxy. Neither is palatable, and the latter really comes down to dropping the connection after a megabyte anyway.
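The "drop the connection after a megabyte" behaviour amounts to a bounded buffer at the proxy. A minimal sketch, assuming a hypothetical proxy component (the class, exception, and 1 MiB default are my own names, not anything from the thread):

```python
# Illustrative only: a reverse proxy buffers header fragments up to a
# hard cap and aborts the stream rather than forwarding an incomplete
# header block upstream. The 1 MiB cap mirrors the figure in the thread.
MAX_HEADER_BLOCK = 1 << 20  # 1 MiB, hypothetical default

class HeaderBlockTooLarge(Exception):
    """Raised when a peer exceeds the header-block cap."""

class HeaderBuffer:
    def __init__(self, limit: int = MAX_HEADER_BLOCK):
        self.limit = limit
        self.chunks: list[bytes] = []
        self.size = 0

    def feed(self, fragment: bytes) -> None:
        self.size += len(fragment)
        if self.size > self.limit:
            # In practice: reset the stream or drop the connection.
            raise HeaderBlockTooLarge(self.size)
        self.chunks.append(fragment)

    def complete(self) -> bytes:
        """Return the reassembled block once END_HEADERS is seen."""
        return b"".join(self.chunks)
```

Buffering this way avoids the zombie state: nothing is forwarded upstream until the block is complete, and a stalled or abusive client costs at most `limit` bytes of proxy memory.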
> > Note, Huffman “compressed” binary data is 3.5x bigger on the wire, so 1 MiB decoded is up to 3.5 MiB transferred. I might be guessing high by many orders of magnitude, but I’m just being conservative on the side of the currently apparent design intent, which is to be unlimited.
> That is perhaps true if you do the naive thing, and only if you don't have any syntax constraints. The real cost depends, but if you do cookie syntax, it's closer to 1.6.
I was thinking in terms of a malicious user who wants to send as many bytes as possible within the protocol's constraints, for the purpose of degrading the server-side network.
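The expansion figures above come down to simple arithmetic. In the HPACK Huffman table the longest codes are 30 bits, so a byte of adversarial binary data can cost up to 30/8 = 3.75 octets on the wire, in the same ballpark as the 3.5x figure quoted in the thread; cookie-like syntax draws from shorter codes, hence the ~1.6 estimate. A back-of-the-envelope check (arithmetic only, not an encoder):

```python
# Worst-case wire cost of a decoded header block under HPACK Huffman
# coding. 30 bits is the longest code in the HPACK table; the rest is
# back-of-the-envelope arithmetic matching the figures in the thread.
MIB = 1 << 20
worst_ratio = 30 / 8          # longest code length / plain octet = 3.75
decoded = 1 * MIB             # 1 MiB of decoded header data
wire_upper_bound = decoded * worst_ratio   # up to ~3.75 MiB transferred
```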

Received on Thursday, 22 May 2014 11:04:00 UTC
