
Re: #540: "jumbo" frames

From: Jason Greene <jason.greene@redhat.com>
Date: Wed, 25 Jun 2014 22:39:57 -0500
Cc: Matthew Kerwin <matthew@kerwin.net.au>, Greg Wilkins <gregw@intalio.com>, HTTP Working Group <ietf-http-wg@w3.org>
Message-Id: <7DFCFA75-3D0D-4797-85A5-689C80B8CD93@redhat.com>
To: Martin Thomson <martin.thomson@gmail.com>

On Jun 25, 2014, at 9:53 PM, Martin Thomson <martin.thomson@gmail.com> wrote:

> On 25 June 2014 19:14, Jason Greene <jason.greene@redhat.com> wrote:
>> One of the biggest problems with CONTINUATION is that a server has no idea how huge the headers will be, and is forced to buffer them until a limit is hit. If this information were known up front, it could either RST_STREAM, or simply discard all subsequent CONTINUATION frames and reply with a too-large status.
> 
> This is a common thread here, but I haven't seen any way of limiting
> headers that is consistently meaningful.
> 
> The obvious answer is to limit the size of the frames that contain
> headers.  But in most cases that matter, the thing performing
> processing is going to maintain an uncompressed copy of the headers.
> 
> So for all but a few implementations, the size that really matters is
> the uncompressed size.  With Huffman coding, that's in the order of a
> 30% premium straight up, with significant variability.  Once you have
> delta coding, the uncompressed size of a block of headers isn't quite
> unbounded, but it is limited only by a multiplication factor relative
> to the header table size.  More simply put, if you have a 4k header
> table, you can have a magnification factor of up to 200000%.
> 
> Based on this, I concluded that it's foolish to try to limit header
> block size at the framing level.  When a state exhaustion attack can
> be mounted using only a handful of bytes, you are going to need
> protection at another layer anyway; protection at the framing layer is
> pretty redundant there.
> 
> That's not to say that prohibiting access to compression
> state in CONTINUATION is a terrible idea; I just don't know that it
> buys much.  Being able to multiplex CONTINUATIONs sounds attractive,
> but that is really just optimizing for a case we shouldn't be
> encouraging at all.  Better in my mind to retain the cost and increase
> the incentives for not doing stupid things with header fields.

A very solid conclusion. I was suggesting this as a possible extension of Matthew’s 1.5 (Mark’s earlier proposal), although I admit the benefits are certainly minor, so perhaps it isn't worth the space. IMO the flow-control possibility would be the biggest benefit. I actually think it discourages bad behavior, since bloated headers are throttled, allowing good requests to still be processed.
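For concreteness, the magnification Martin describes can be sketched with back-of-the-envelope arithmetic. This is only an illustration under assumed numbers (a ~2-byte wire cost per indexed header field reference, and the 32-byte per-entry overhead used in HPACK's size accounting), not anything normative from the thread:

```python
# Sketch of the header-table magnification factor for a "4k header table".
# Assumptions (mine, not from the thread): each indexed reference costs
# about 2 wire bytes, and a table entry's size is name + value + 32 bytes
# of accounting overhead, so the largest single entry carries ~4064 bytes.

TABLE_SIZE = 4096        # header table size in bytes
ENTRY_OVERHEAD = 32      # per-entry accounting overhead (assumed)
INDEX_BYTES = 2          # assumed wire cost of one indexed reference

# Largest single entry that fits in the table, measured as emitted bytes:
entry_content = TABLE_SIZE - ENTRY_OVERHEAD      # 4064 bytes

# A header block made entirely of such references expands each ~2-byte
# index into ~4064 uncompressed bytes, a magnification of roughly:
magnification = entry_content / INDEX_BYTES      # on the order of 2000x

print(f"~{magnification:.0f}x, i.e. ~{magnification * 100:.0f}%")
```

That lands in the neighborhood of the 200000% figure above, which is the point: a limit on compressed frame size does not bound the uncompressed size a receiver must be prepared to process.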

--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat
Received on Thursday, 26 June 2014 03:40:48 UTC
