
Re: Interleaving #481 (was Re: Limiting header block size)

From: Greg Wilkins <gregw@intalio.com>
Date: Tue, 3 Jun 2014 21:43:11 +0200
Message-ID: <CAH_y2NHiXWLNKB2gm2DxEemrDrTkUG85r5Z6UQyFm+wb_gTZ8Q@mail.gmail.com>
To: Martin Thomson <martin.thomson@gmail.com>
Cc: Simone Bordet <simone.bordet@gmail.com>, Michael Sweet <msweet@apple.com>, HTTP Working Group <ietf-http-wg@w3.org>
Martin,

I don't understand how a single frame can be of arbitrary size, or even
significantly larger than the negotiated max header table size.

A received header block may contain either literal headers or indexes into
the header table.  If the header block is all Huffman-coded literals, then
the maximum decoded size is about twice the frame size.  If the header
block is all indexes, then there is no additional memory commitment at all
beyond the strings already in the header table and the pointers to them in
the buffer that the reader has already committed to read.  An efficient
server will probably build a random-access structure from the buffer
contents, but that too will have a limited size, as it will just contain
references to the header table, which is a memory commitment already made.
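To put a number on that, here is a minimal sketch (Python, with my own illustrative constants and function name, nothing from the draft) of the bound argued above: decoding one header frame commits at most a small constant multiple of the frame size in new literal data, while indexed entries cost only a reference into the already-negotiated header table.

```python
# The shortest HPACK Huffman code is 5 bits, so 8 coded bits decode to at
# most 8/5 octets; "about twice the frame size" is a safe round-up.
HUFFMAN_MAX_EXPANSION = 2  # illustrative round-up, not a spec value

def worst_case_new_memory(frame_size: int) -> int:
    """Upper bound on *new* memory committed while decoding one header
    frame.  Indexed entries cost only a pointer each; the strings they
    name are part of the header-table commitment already made when the
    table size was negotiated."""
    return frame_size * HUFFMAN_MAX_EXPANSION

# e.g. a 16k frame commits at most ~32k of new literal data, no matter
# how large the *logical* header set the indexes expand to.
```

So the receiver's exposure per frame is bounded by values it has already agreed to, which is the point.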

I'm glad that you agree with the fundamental idea that limits should not be
hidden and undiscoverable.  I also agree that hop-by-hop negotiation is
complex and of little use, because it just tells an application that it
can't work.  Instead it should be the other way around: the transport
should tell the application what it must do to always work over a
compliant HTTP/2 connection.

I get that it is very frustrating when people like me come along and try
to overturn decisions that have already been put to consensus.  However, I
have a responsibility to raise the concerns as I see them, and I have
already apologised for my late re-entry into the process.  If, having
raised my concerns, I fail to make my case, then I will hopefully not turn
into a chronic malcontent, but will accept the consensus and move on.

But I would also point out that the proposal I made above does support
arbitrarily sized headers, just not for the initial headers of an HTTP
request.  So it is not totally at odds with the previous consensus.  It
tries both to protect intermediaries and servers from unlimited
commitments and to allow applications to send arbitrarily large metadata,
just separately from the transport metadata.

Finally, if proxies are really required to buffer an unknown number of
frames before forwarding any of them, then I expect a lot will just opt for
the easy path of closing any connection that attempts to send a
continuation frame.  Are there any server / intermediary developers here
who plan to allow headers significantly larger than the existing de facto
standard of 8k?
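To make that "easy path" concrete, here is a hedged sketch (hypothetical Python, not Jetty code; class and exception names are mine) of a receiver that enforces a fixed cumulative cap on HEADERS plus CONTINUATION payloads and aborts rather than buffering without bound:

```python
MAX_HEADER_BLOCK = 8 * 1024  # the existing de facto ~8k limit

class HeaderBlockTooLarge(Exception):
    """Raised when the accumulated header block exceeds the cap; a real
    server would tear down the connection (e.g. with GOAWAY) here."""

class HeaderAccumulator:
    def __init__(self, limit: int = MAX_HEADER_BLOCK):
        self.limit = limit
        self.fragments: list[bytes] = []
        self.size = 0

    def add_fragment(self, payload: bytes) -> None:
        """Called for the HEADERS frame and each CONTINUATION fragment."""
        self.size += len(payload)
        if self.size > self.limit:
            raise HeaderBlockTooLarge(
                "header block exceeds %d bytes" % self.limit)
        self.fragments.append(payload)

    def complete(self) -> bytes:
        """Join the fragments into one block for HPACK decoding."""
        return b"".join(self.fragments)
```

The design choice is the point: the cap is checked before anything is buffered past the limit, so the receiver's commitment is known up front, regardless of how many continuation frames the peer sends.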

regards


PS.  It would also be good to know whether the small percentage of requests
with headers larger than 16k are actually valid requests.  For all we know
they are application errors sending stupidly large repeated cookies/headers
that are already rejected by the server as being too large.

On 3 June 2014 20:16, Martin Thomson <martin.thomson@gmail.com> wrote:

> On 3 June 2014 11:06, Simone Bordet <simone.bordet@gmail.com> wrote:
> > Can you please expand in a more technical way the arguments of why it
> > is a bad idea, and how the existence of continuations is orthogonal to
> > header size ?
> > Making examples would help.
>
> A header block can contain any amount of actual data.  Anywhere from
> absolutely nothing (because it's all padding, or it's really short) to
> really ---ing gigantic (because it uses HPACK).
>
> Deciding that you want to reject a frame based on a signal that is so
> abstractly connected to the actual thing you are concerned about is a bad
> idea.  It is essentially arbitrary (hence the date/RGB comment).
> Arbitrary rejections lead to all sorts of bad behaviour from clients
> trying to avoid arbitrary behaviour, up to and including cargo
> cult-type actions.
>
> (And yes, I'm aware of how this is an argument for having a known,
> deterministic way to know whether a request is acceptable before
> sending it, but, as I explained, I don't think that this is feasible.)
>



-- 
Greg Wilkins <gregw@intalio.com>
http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
http://www.webtide.com  advice and support for jetty and cometd.
Received on Tuesday, 3 June 2014 19:43:40 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 17:14:31 UTC