Re: #540: "jumbo" frames

On 25 June 2014 19:14, Jason Greene <jason.greene@redhat.com> wrote:
> One of the biggest problems with CONTINUATION is that a server has no idea how huge the headers will be, and is forced to buffer them until a limit is hit. If this information was known up front it could either RST_STREAM, or simply discard all subsequent CONTINUATION frames and reply with a too large status.

This is a common thread here, but I haven't seen any way of limiting
headers that is consistently meaningful.

The obvious answer is to limit the size of the frames that contain
headers.  But in most cases that matter, the thing doing the
processing is going to maintain an uncompressed copy of the headers.
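
To put that concretely, here is a toy sketch of my own (the frame
payloads are made up):

    # A per-frame cap tells you nothing about what the receiver holds.
    MAX_FRAME_PAYLOAD = 16384   # hypothetical framing-layer limit

    # Two invented header block fragments (HEADERS + CONTINUATION payloads).
    fragments = [b"\x82\x86" * 4000, b"\x84" * 8000]

    # Every frame individually passes the framing-layer check...
    assert all(len(f) <= MAX_FRAME_PAYLOAD for f in fragments)

    # ...but the receiver still concatenates the whole block, decodes it,
    # and keeps the (larger) uncompressed header list around.
    header_block = b"".join(fragments)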

So for all but a few implementations, the size that really matters is
the uncompressed size.  With Huffman coding, that's on the order of a
30% premium straight up, with significant variability.  Once you have
delta coding, the uncompressed size of a block of headers isn't quite
unbounded, but it is limited only by a multiplication factor relative
to the header table size: a tiny indexed reference can expand into an
entry nearly as large as the table itself.  More simply put, if you
have a 4k header table, you can have a magnification factor of up to
200000%.
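
The arithmetic behind that figure, as I read it (the two-byte cost per
indexed reference is my assumption):

    TABLE_SIZE = 4096                        # the 4k header table
    ENTRY_OVERHEAD = 32                      # per-entry overhead charged to the table
    MAX_ENTRY = TABLE_SIZE - ENTRY_OVERHEAD  # ~4064 bytes of name + value

    INDEX_COST = 2   # an indexed reference to that entry is a byte or two on the wire

    print(MAX_ENTRY / INDEX_COST)            # ~2000x, i.e. roughly 200000%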

Based on this, I concluded that it's foolish to try to limit header
block size at the framing level.  When a state exhaustion attack can
be mounted using only a handful of bytes, you are going to need
protection at another layer anyway; protection at the framing layer
is pretty redundant there.
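
One plausible shape for that other layer, sketched very roughly (the
decoder interface here is invented, not taken from any draft):

    class HeadersTooLarge(Exception):
        pass

    MAX_DECODED_HEADER_BYTES = 16 * 1024   # whatever the application will hold

    def decode_header_block(decoder, fragments):
        # Decode the concatenated HEADERS/CONTINUATION fragments, giving up
        # as soon as the *uncompressed* size exceeds the limit.  The caller
        # can map the exception to RST_STREAM or a "headers too large" reply.
        total = 0
        headers = []
        for name, value in decoder.decode(b"".join(fragments)):  # hypothetical API
            total += len(name) + len(value) + 32   # count entries as the table does
            if total > MAX_DECODED_HEADER_BYTES:
                raise HeadersTooLarge()
            headers.append((name, value))
        return headers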

That's not to say that prohibiting access to compression state in
CONTINUATION is a terrible idea; I just don't know that it buys much.
Being able to multiplex CONTINUATION frames sounds attractive, but
that is just optimizing for a case we really shouldn't be encouraging
at all.  Better in my mind to retain the cost and increase the
incentives for not doing stupid things with header fields.

Received on Thursday, 26 June 2014 02:54:23 UTC