
Re: Interleaving #481 (was Re: Limiting header block size)

From: Greg Wilkins <gregw@intalio.com>
Date: Tue, 3 Jun 2014 15:26:10 +0200
Message-ID: <CAH_y2NEO=AeDbHNJMYDguON0DgjcFLSJ_HA=1gN_5iv98wZKAw@mail.gmail.com>
To: Roland Zink <roland@zinks.de>
Cc: Roberto Peon <grmocg@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>
On 3 June 2014 12:43, Roland Zink <roland@zinks.de> wrote:

> > Welp, I can tell you that we do see headers, especially for responses,
> > that are > 64k.
>
> I can tell you that we do see headers that are > 64k, but not many. I even
> have seen a URL > 90 k.

So already with HTTP/1.1, applications are putting a lot of application
meta data into the transport meta data channel.

Now imagine what will happen when we give that meta data channel
favourable transport characteristics above and beyond normal data
frames!  I predict its use will explode, and there will be a lot of
pressure on intermediaries and servers to increase any limits they place on
header size.

So far, the only response I have seen to these concerns is to say that a
server/intermediary is free to unilaterally apply its own limit.  But I
fail to see how applying an unknown, undiscoverable limit is in any way
better than a specification-mandated (or negotiated) limit.

I think that the unknown resource limitation that HTTP/1.1 applications
face today is something that really should be solved in HTTP/2.

Applying flow control and interleaving to headers would at least remove the
incentive to use meta data instead of data, and would essentially put us in
the same situation as HTTP/1.1 today.  This can be done with HPACK by
restricting header sets to 1 frame and then aggregating sets until end of
headers.  This is a slight complication in the encoder, but not much.
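
To make the encoder change concrete, here is a minimal sketch of the
fragmentation step: after HPACK-encoding a header set, the block is split
into frame-sized pieces, with only the last piece carrying the
end-of-headers flag.  The names (MAX_FRAME_PAYLOAD, fragment_header_block)
and the 16k payload limit are illustrative assumptions, not taken from any
spec text.

```python
# Illustrative sketch only: split one HPACK-encoded header block into
# per-frame fragments. MAX_FRAME_PAYLOAD is an assumed per-frame payload
# limit, not a value mandated anywhere.
MAX_FRAME_PAYLOAD = 16384


def fragment_header_block(encoded_block: bytes):
    """Yield (payload, end_of_headers) pairs, one per frame."""
    fragments = [
        encoded_block[offset:offset + MAX_FRAME_PAYLOAD]
        for offset in range(0, len(encoded_block), MAX_FRAME_PAYLOAD)
    ]
    if not fragments:
        fragments = [b""]  # an empty header block still needs one frame
    for i, chunk in enumerate(fragments):
        # Only the final fragment signals end of the header set.
        yield chunk, i == len(fragments) - 1
```

The point of the sketch is that the extra encoder work really is small: a
slicing loop plus a flag on the final fragment.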

Having a 1-frame limit on the size of HTTP headers is, I think, acceptable,
although it would be disruptive to a small number of applications.
However, I'm certainly willing to consider making an application-specific
meta data channel available to such applications.  In fact we already allow
meta data frames to be interleaved with data frames, so surely it is just a
matter of migrating the few applications that need large meta data to use
this new mechanism in HTTP/2, rather than encouraging further flooding of
the transport meta data channel with application data.

In short I am proposing:

   - All HPACK header blocks are limited in size to 1 frame, thus allowing
   arbitrary interleaving of frames.
   - Header sets from continuation frames are aggregated until
   END_OF_HEADERS is received, so arbitrarily large header sets can be
   transported as subsets.
   - Headers, continuations and push promise frames are flow controlled.
   - The initial headers for an HTTP request are limited to 1 frame; this
   limits the memory commitment of the server accepting a stream.
   - Response headers, trailers and embedded meta data may span an unlimited
   number of frames, so arbitrarily large meta data can still be transported,
   but not as part of the HTTP request meta data.
   - Only the initial header frame will be made available to request
   handlers via the normal HTTP meta data semantics. This limits the server's
   memory commitment.
   - Applications that require large meta data associated with a
   request/response are encouraged to develop/propose/standardize APIs to
   access the HTTP/2 capability for trailers and embedded meta data.

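The receiver side of the proposal above can be sketched just as briefly:
fragments are buffered until END_OF_HEADERS, and a purely local cap bounds
the memory commitment.  The class name and the MAX_AGGREGATED value are
hypothetical, chosen only to illustrate the shape of the mechanism.

```python
# Illustrative receiver-side sketch: aggregate header fragments until
# END_OF_HEADERS is seen, under a local memory cap. MAX_AGGREGATED is an
# assumed local policy value, not anything mandated by a specification.
MAX_AGGREGATED = 256 * 1024


class HeaderAggregator:
    def __init__(self):
        self._buffer = bytearray()

    def on_fragment(self, payload: bytes, end_of_headers: bool):
        """Return the complete header block on END_OF_HEADERS, else None."""
        if len(self._buffer) + len(payload) > MAX_AGGREGATED:
            # Local limit exceeded: the receiver refuses the header set
            # without committing unbounded memory.
            raise ValueError("header set exceeds local limit")
        self._buffer.extend(payload)
        if end_of_headers:
            block, self._buffer = bytes(self._buffer), bytearray()
            return block
        return None
```

Note that the cap here is the receiver's own choice; the proposal's point is
that the 1-frame limit on initial request headers makes the *minimum*
commitment predictable, while anything larger moves to the trailer/embedded
meta data mechanism.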

Greg Wilkins <gregw@intalio.com>
http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
http://www.webtide.com  advice and support for jetty and cometd.
Received on Tuesday, 3 June 2014 13:26:44 UTC
