
Re: Interleaving #481 (was Re: Limiting header block size)

From: Martin Thomson <martin.thomson@gmail.com>
Date: Tue, 3 Jun 2014 13:49:15 -0700
Message-ID: <CABkgnnXR+LS5Dt63R1Tm6dc0VD4p9x0kMZgo+ho=CzoriTLNvw@mail.gmail.com>
To: Greg Wilkins <gregw@intalio.com>
Cc: Simone Bordet <simone.bordet@gmail.com>, Michael Sweet <msweet@apple.com>, HTTP Working Group <ietf-http-wg@w3.org>
On 3 June 2014 12:43, Greg Wilkins <gregw@intalio.com> wrote:
> A received header block may contain either literal headers or indexes into
> the header table.  If the header block is all huffman, then the max size is
> about twice the frame size.  If the header block is all indexes, then there
> is no additional memory commitment at all other than the strings already in
> the header table and the pointers to them in the buffer that the reader has
> already committed to read.

Not really.  Since referenced entries can be evicted after they are
emitted, the actual size can be much larger (though perhaps with
careful reference counting, only a smidgeon over 32K + header table
size).
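To see why, here is a toy model (not real HPACK, and the sizes are illustrative) of a dynamic table where an all-indexed-or-literal block inserts a large entry, references it, inserts another large entry that evicts the first, and so on. The total emitted output ends up far larger than the table ever holds at once:

```python
# Toy model (NOT real HPACK wire format) of eviction-after-emission:
# entries are emitted when referenced, and later insertions in the same
# block can evict them, so total emitted bytes exceed the table size.

TABLE_CAP = 4096  # dynamic table capacity, as in HPACK's default

class Table:
    def __init__(self, cap):
        self.cap = cap
        self.entries = []  # newest first

    def insert(self, value):
        self.entries.insert(0, value)
        size = sum(len(v) for v in self.entries)
        while size > self.cap:            # evict the oldest entries
            size -= len(self.entries.pop())

def decode(block, table):
    """block is a list of ('index', i) or ('literal', s) instructions."""
    out = []
    for op, arg in block:
        if op == 'index':
            out.append(table.entries[arg])  # emit the referenced entry
        else:                               # literal with incremental indexing
            table.insert(arg)
            out.append(arg)
    return out

big = 'x' * 2048
block = []
for _ in range(16):
    block.append(('literal', big))  # insert (evicting older large entries)
    block.append(('index', 0))      # reference the entry just inserted

decoded = decode(block, Table(TABLE_CAP))
emitted = sum(len(v) for v in decoded)
print(emitted)  # 16 * 2 * 2048 = 65536 bytes emitted from one block
```

The table never holds more than 4 KiB, yet the decoder emits 64 KiB from a single block, which is the gap that careful reference counting would have to close.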

> I'm glad that you agree with the fundamental idea that limits should not be
> hidden and undiscoverable.  I also agree that hop-by-hop negotiation is
> complex and of little use - because it just tells an application that it
> can't work.  Instead it should be the other way around: the transport
> should tell the application what it should do to always work over a
> compliant HTTP/2 connection.

That is quite hard because you are potentially talking to an HTTP/1.1
hop, unaware of the limits imposed by other hops.

> I get it that it is very frustrating when people like me come along and try
> to overturn decisions that have been put to consensus before.

Sometimes.  But only occasionally, when the tone is off, and that sort of thing.

> But I would also point out that the proposal I made above does support
> arbitrary sized headers.

Not quite.  Arbitrarily sized blocks, but not individual fields.  Long
URLs and cookies are, unfortunately, part of the landscape.

> Finally, if proxies are really required to buffer an unknown number of
> frames before forwarding any of them, then I expect a lot will just opt for
> the easy path of closing any connection that attempts to send a continuation
> frame.

As I said, I hope that they don't.  I hope that they look at the size
of the header fields (before or after compression, depending on how
they are built).  After all, post-decompression size is potentially
much larger than you think.

But I do agree: I expect that a limit like 8k is in the right ballpark
here - for most users and use cases.  I'm sure that you would be
entirely justified in applying that limit.  What I object to here is
the implication that that is enough for everybody, and for every use
case.
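A sketch of the check I have in mind (names and the 8 KiB figure are illustrative, not normative): a proxy can bound the *decoded* size of a header block as fields come off the decompressor, rather than refusing CONTINUATION frames outright.

```python
# Hypothetical proxy-side limit on decoded header-block size. The 8 KiB
# figure is the ballpark from this thread, not a protocol requirement.

MAX_DECODED = 8 * 1024

class HeaderBlockTooLarge(Exception):
    pass

def accumulate(fields, limit=MAX_DECODED):
    """fields yields (name, value) pairs as they come off the decompressor.

    Raises as soon as the running decoded size exceeds the limit, so the
    proxy never buffers an unbounded block before deciding.
    """
    total = 0
    out = []
    for name, value in fields:
        # 32 bytes of per-entry overhead, matching HPACK's size accounting
        total += len(name) + len(value) + 32
        if total > limit:
            raise HeaderBlockTooLarge("decoded size %d > %d" % (total, limit))
        out.append((name, value))
    return out
```

The point of checking incrementally is that rejection happens at the first offending field, whatever the compressed block claimed its size would be.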
Received on Tuesday, 3 June 2014 20:49:43 UTC
