RE: Interleaving #481 (was Re: Limiting header block size)

In particular, we have seen cases where headers exceeded 16KB, and we had to raise our maximum to support them.  Kerberos tickets can be insanely large.

We now cap at 64KB, and haven't yet encountered a use case that required exceeding that.  While you may be comfortable with an 8KB max in your implementation, you'd be breaking real-world scenarios by writing that limit into the protocol.

-----Original Message-----
From: Martin Thomson [mailto:martin.thomson@gmail.com] 
Sent: Tuesday, June 3, 2014 1:49 PM
To: Greg Wilkins
Cc: Simone Bordet; Michael Sweet; HTTP Working Group
Subject: Re: Interleaving #481 (was Re: Limiting header block size)

On 3 June 2014 12:43, Greg Wilkins <gregw@intalio.com> wrote:
> A received header block may contain either literal headers or indexes 
> into the header table.  If the header block is all huffman, then the 
> max size is about twice the frame size.  If the header block is all 
> indexes, then there is no additional memory commitment at all other 
> than the strings already in the header table and the pointers to them 
> in the buffer that the reader has already committed to read.

Not really.  Since referenced entries can be evicted after they are emitted, the actual size can be much larger (though perhaps with careful reference counting, only a smidgeon over 32K + header table size).
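To make the expansion concrete, here is a toy sketch (not real HPACK, and the numbers are invented for illustration) of why an all-indexed header block can decode to far more than the frame size: each index reference is a single byte on the wire, but emits the full table entry.

```python
# Toy illustration (not real HPACK): a header block made entirely of
# 1-byte index references can emit far more bytes than the frame holds,
# because each reference expands to the full header-table entry.
table = [("cookie", "x" * 4000)]  # one ~4KB entry in the dynamic table

def emitted_size(entries):
    # Decoded size of the fields: name + value lengths.  (HPACK also
    # charges 32 bytes of per-entry accounting overhead; omitted here.)
    return sum(len(n) + len(v) for n, v in entries)

frame = bytes([0x80]) * 16384        # a 16KB frame of single-byte index refs
decoded = [table[0]] * len(frame)    # every reference emits the same entry
ratio = emitted_size(decoded) / len(frame)
print(f"frame: {len(frame)} bytes -> decoded: {emitted_size(decoded)} bytes "
      f"(~{ratio:.0f}x expansion)")
```

The point is only that the on-the-wire frame size says very little about the receiver's memory commitment once references are expanded.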

> I'm glad that you agree with the fundamental idea that limits should 
> not be hidden and undiscoverable.  I also agree that hop-by-hop 
> negotiation is complex and of little use - because it just tells an 
> application that it can't work.  Instead, it should be the other way 
> around: the transport should tell the application what it should do to 
> always work over a compliant http2 connection.

That is quite hard because you are potentially talking to an HTTP/1.1 hop, unaware of the limits imposed by other hops.

> I get it that it is very frustrating when people like me come along 
> and try to over turn decisions that have been put to consensus before.

Sometimes.  But only occasionally, when the tone is off, and that sort of thing.

> But I would also point out that the proposal I made above does support 
> arbitrary sized headers.

Not quite.  Arbitrarily sized blocks, but not individual fields.  Long URLs and cookies are part of the landscape, unfortunately.

> Finally, if proxies are really required to buffer an unknown number of 
> frames before forwarding any of them, then I expect a lot will just 
> opt for the easy path of closing any connection that attempts to send 
> a continuation frame.

As I said, I hope that they don't.  I hope that they look at the size of the header fields (before or after compression, depending on how they are built).  After all, post-decompression size is potentially much larger than you think.

But I do agree: I expect that a limit like 8k is in the right ballpark here - for most users and use cases.  I'm sure that you would be entirely justified in applying that limit.  What I object to here is the implication that that is enough for everybody, and for every use case.

Received on Tuesday, 3 June 2014 21:02:48 UTC