
Re: Our Schedule

From: Tatsuhiro Tsujikawa <tatsuhiro.t@gmail.com>
Date: Tue, 27 May 2014 23:49:48 +0900
Message-ID: <CAPyZ6=LbA0MDyRmXYrSP5ebkQnKDyio0JdMZPKp=WwWaMu+zCA@mail.gmail.com>
To: David Krauss <potswa@gmail.com>
Cc: Greg Wilkins <gregw@intalio.com>, Mark Nottingham <mnot@mnot.net>, HTTP Working Group <ietf-http-wg@w3.org>
On Tue, May 27, 2014 at 9:27 AM, David Krauss <potswa@gmail.com> wrote:

> On 2014-05-26, at 10:07 PM, Tatsuhiro Tsujikawa <tatsuhiro.t@gmail.com>
> wrote:
> Server and intermediaries can respond 431 if incoming headers are too
> large and refuse to forward to the shared backend connection.
> What is the correct behavior of a proxy that gets excessive headers from a
> server?
There are several actions a proxy can take:
1) Ignore the excessive headers
2) Send GOAWAY (if one header is too large to handle and keep in the header table)

In terms of the framing protocol, all of these are correct behavior.  A
proxy has the right to protect itself.
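To make the options above concrete, here is a minimal sketch (not from any
actual proxy) of a proxy accumulating a header block from
HEADERS/CONTINUATION fragments and deciding whether to keep going, reject
the stream with 431, or tear the connection down with GOAWAY.  The function
name, state shape, and both limits are illustrative assumptions:

```python
# Illustrative limits (assumptions, not from any spec or implementation):
MAX_HEADER_BLOCK = 16 * 1024   # per-request limit -> reject stream with 431
MAX_SINGLE_FIELD = 8 * 1024    # one field too big for the header table -> GOAWAY

def on_header_fragment(state, fragment, largest_field_len):
    """Decide what a proxy does after receiving one header block fragment.

    state            -- dict with running 'size' of the header block so far
    fragment         -- bytes of this HEADERS/CONTINUATION fragment
    largest_field_len -- length of the largest decoded field seen so far

    Returns 'continue', '431', or 'goaway'.
    """
    state['size'] += len(fragment)
    if largest_field_len > MAX_SINGLE_FIELD:
        # A single header too large to handle and keep in the header table:
        # decoding state is unrecoverable, so drop the whole connection.
        return 'goaway'
    if state['size'] > MAX_HEADER_BLOCK:
        # Headers too large overall: refuse to forward to the shared
        # backend connection and answer this stream with 431.
        return '431'
    return 'continue'
```

The key design point is that the 431 case is per-stream, while the GOAWAY
case is per-connection, because an oversized single field corrupts the
shared HPACK decoding context.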

> What if the header stream takes too long, without being particularly
> large? Then the problem is not too many headers at all. Indeed, for a
> reverse proxy with a slow client, this is sure to happen as sure as
> streaming kicks in at all.
This is not limited to HEADERS.  Just send 1 byte of the frame header of
any frame and pause.
This is a good candidate for a timeout, isn't it?
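A sketch of that timeout idea: since any peer can send one byte of a frame
and stall, the read loop bounds how long it waits between received chunks.
The timeout value and function name here are illustrative assumptions:

```python
import socket

READ_TIMEOUT = 10.0  # seconds allowed between received chunks (example value)

def read_exact(sock, n):
    """Read exactly n bytes, failing if the peer stalls past the timeout."""
    sock.settimeout(READ_TIMEOUT)
    buf = b''
    while len(buf) < n:
        try:
            chunk = sock.recv(n - len(buf))
        except socket.timeout:
            # Peer sent a partial frame and went silent: give up.
            raise TimeoutError('peer stalled mid-frame')
        if not chunk:
            raise ConnectionError('peer closed mid-frame')
        buf += chunk
    return buf
```

The same per-read timeout covers a stalled HEADERS sequence, a stalled
DATA frame, or a single dangling byte, which is the point being made above.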

> I get the sense that HTTP/2 reverse proxies can’t use multiplexing
> upstream. Is anyone yet testing reverse proxies?
> Receiving CONTINUATION forever is an unrealistic situation.
> Receiving CONTINUATION even once is unusual, if not unrealistic. The frame
> size was reduced simply to generate more CONTINUATIONs, so just as easily
> they could be all but eliminated. Yet header-streaming protocols come up as
> a necessary future use case. If they are not nice to have and we don’t want
> to support them, then a line should be drawn and a limit set.
> Are there any uses **in practice on the public internet** for headers
> longer than 64K as serialized by HTTP/1.1?
I don't have actual statistics on header lengths, but server software has
its own limits on header buffers.
For example, HAProxy uses an 8K buffer for the entire request headers by
default, and its manual says changing that value is not recommended, so the
practical limit is somewhere near that.
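For comparison against a buffer limit like that, here is a rough way to
measure headers "as serialized by HTTP/1.1".  The 8K figure echoes the
HAProxy default mentioned above; the helper itself is just an illustration:

```python
HEADER_BUFFER = 8 * 1024  # example buffer for the entire request head

def serialized_size(method, path, headers):
    """Byte length of the HTTP/1.1 request line plus header fields."""
    request_line = f"{method} {path} HTTP/1.1\r\n"
    fields = "".join(f"{name}: {value}\r\n" for name, value in headers.items())
    # Trailing CRLF terminates the header section.
    return len((request_line + fields + "\r\n").encode('ascii'))

def fits(method, path, headers):
    """True if the whole request head fits in one header buffer."""
    return serialized_size(method, path, headers) <= HEADER_BUFFER
```

A request with a multi-kilobyte Cookie header is the usual way real traffic
approaches such a limit, long before anything near 64K.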

Best regards,
Tatsuhiro Tsujikawa
Received on Tuesday, 27 May 2014 14:50:39 UTC
