Re: HEADERS and flow control

On 2014-05-22, at 1:24 AM, Michael Sweet <msweet@apple.com> wrote:

> I'm not suggesting that implementations need to track multiple header sets, just the 2 as required today (one "up" and one "down").  All I am saying is that preventing intervening DATA frames is unnecessary for the compressor that has been chosen, and that already-established streams should not be held up for a set of HEADER frames.

Note that header blocks are also allowed in the middle of a stream, between DATA frames. So even though data could keep flowing, the sender would still have to stall any in-stream metadata. For a protocol tending toward large header blocks, this could have a snowball effect. But I suppose the worst case is only the status quo.

We can nip this all in the bud by limiting header block size. For header-heavy protocols, dividing the metadata stream into multiple blocks should be an essential feature in any case. Should an upper limit be set on header block size? How could it be specified?
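
To make the question concrete, here is a minimal sketch (in Go) of how a receiver could enforce a decoded-size limit while a header block is being decoded. The 1 MiB cap and the per-field accounting (name length + value length + 32 octets, the same rule I'm assuming is used for the compressor state) are my assumptions for illustration, not text from any draft:

    package hblimit

    import "errors"

    // Hypothetical limit; the value is only an example.
    const assumedMaxHeaderBlockSize = 1 << 20 // 1 MiB, decoded

    var errHeaderOverflow = errors.New("decoded header block exceeds limit")

    // headerField is one decoded name/value pair emitted by the HPACK decoder.
    type headerField struct {
        name, value string
    }

    // checkDecodedSize accumulates the decoded size of a header block as
    // fields are emitted, counting each field as name + value + 32 octets
    // (the assumed accounting rule), and fails as soon as the limit is
    // crossed so the receiver can stop decoding and reset the stream.
    func checkDecodedSize(fields []headerField, limit uint64) error {
        var size uint64
        for _, f := range fields {
            size += uint64(len(f.name)) + uint64(len(f.value)) + 32
            if size > limit {
                return errHeaderOverflow // caller would send RST_STREAM
            }
        }
        return nil
    }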

A good guideline could be that an intermediary should relay header blocks up to a megabyte (decoded, using the same accounting guidelines as for the reference set) and return RST_STREAM otherwise. The amount of metadata added by intermediaries should itself be capped, at maybe half a kilobyte. Categorizing implementations:

- Non-embedded clients tend to have megabytes to spare, so may be naive.
- Embedded clients are at the mercy of whatever they get. The server application is written specifically for the client and so will not send an overload, but proxies should be generally friendly as well and should not add many headers.
- Servers and proxies should be “serious” implementations. One megabyte per connection adds up to an enormous amount of memory across many connections, but a low-memory condition may be handled by delaying processing of new header blocks.
- An origin server (sysadmin) may set the limit lower than a megabyte.
- A specific HEADER_OVERFLOW error code could be cached by a proxy and used to preemptively terminate possibly malicious subsequent attempts to the same origin.
- A proxy never needs to retain the header set; it only needs to peek at a few headers as they’re received and directly forward the encoded bytes. Counting the hypothetical decoded size isn’t much additional work, though (a rough sketch follows this list).
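
For that last point, here is a rough sketch of the pass-through behaviour, again in Go. The accountingDecoder interface, the fragment-based forwarding, and the limit are all hypothetical; the only idea taken from the above is that the proxy forwards the encoded bytes untouched and only keeps a running decoded-size total:

    package proxy

    import "errors"

    var errHeaderOverflow = errors.New("header block too large")

    // accountingDecoder is a hypothetical HPACK decoder that reports the
    // decoded size of the fields in a fragment (name + value + 32 octets
    // each, by the same assumed accounting) without retaining the header set.
    type accountingDecoder interface {
        DecodedSize(fragment []byte) (uint64, error)
    }

    // relayHeaderBlock forwards each encoded fragment downstream untouched
    // and gives up once the running decoded size passes the limit, at which
    // point the caller would send RST_STREAM (with a HEADER_OVERFLOW-style
    // code, if such a thing existed).
    func relayHeaderBlock(fragments [][]byte, dec accountingDecoder,
        forward func([]byte) error, limit uint64) error {

        var total uint64
        for _, frag := range fragments {
            n, err := dec.DecodedSize(frag)
            if err != nil {
                return err
            }
            total += n
            if total > limit {
                return errHeaderOverflow
            }
            if err := forward(frag); err != nil {
                return err
            }
        }
        return nil
    }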

Note that Huffman-“compressed” binary data is up to 3.5x bigger on the wire, so 1 MiB decoded is up to 3.5 MiB transferred. I might be guessing high by many orders of magnitude, but I’m just erring on the side of the currently apparent design intent, which is to be unlimited.

Received on Thursday, 22 May 2014 04:00:40 UTC