
Re: #282: Recommend minimum sizes for protocol elements

From: Willy Tarreau <w@1wt.eu>
Date: Wed, 29 Jun 2011 21:57:17 +0200
To: Karl Dubost <karld@opera.com>
Cc: Mark Nottingham <mnot@mnot.net>, httpbis Group <ietf-http-wg@w3.org>, Poul-Henning Kamp <phk@phk.freebsd.dk>
Message-ID: <20110629195717.GG22233@1wt.eu>
On Wed, Jun 29, 2011 at 03:30:50PM -0400, Karl Dubost wrote:
> Le 24 juin 2011 à 03:26, Mark Nottingham a écrit :
> > Haven't heard much. If we s/20k/4k/ in the header section, any other comments / suggestions / concerns?
> What is the state of implementations right now?
> What servers, proxies, libraries do?
> Floor, ceiling, no limits, buffer overflows?

There are many different methods. Haproxy, which I happen to know, uses
a default limit of 7 or 8 kB (depending on build options) for the whole
set of headers, and is deployed that way in many places, including some
large public sites. An older version defaulted to 4 kB for a long time,
but it was not as widely deployed as today's version is, so the lack of
failure reports does not count for much.

Apache 1.3, which I've worked with a lot, used to support 8 kB per line
and up to 101 lines (100 headers plus the request line), if my memory
serves me right. For other servers I don't remember the limits.
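Limits of that kind can be sketched roughly as follows. This is a minimal illustration, not Apache's actual code: the `parse_headers` function, the `HeaderTooLarge` exception, and the simplified line splitting are all assumptions, only the numbers (8 kB per line, 100 fields) come from the recollection above.

```python
# Illustrative sketch of per-line and per-count header limits, using the
# Apache 1.3 defaults recalled above (8 kB per line, 100 header fields).
MAX_LINE = 8 * 1024
MAX_FIELDS = 100

class HeaderTooLarge(Exception):
    """Raised when a request exceeds the configured header limits."""

def parse_headers(raw: bytes) -> dict:
    """Parse CRLF-separated header lines, rejecting oversized input."""
    lines = [line for line in raw.split(b"\r\n") if line]
    if len(lines) > MAX_FIELDS:
        raise HeaderTooLarge(f"more than {MAX_FIELDS} header fields")
    headers = {}
    for line in lines:
        if len(line) > MAX_LINE:
            raise HeaderTooLarge(f"header line exceeds {MAX_LINE} bytes")
        name, _, value = line.partition(b":")
        headers[name.strip().lower()] = value.strip()
    return headers
```

A real parser would also reject lines without a colon and handle obsolete folding; the point here is only that both a per-line and a per-count ceiling are enforced before anything is stored.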

The few times I got a report about haproxy blocking too large a message,
it was because of an application bug causing cookies to grow forever
(e.g. the server sends a Set-Cookie which appends the cookie value to
all cookies that were received).
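That failure mode can be simulated in a few lines. This is a toy sketch, not haproxy code: the 8 kB limit, the cookie names, and both helper functions are assumptions chosen to match the bug described above.

```python
# Simulate the bug described above: a server's Set-Cookie echoes back
# everything it received plus a fresh value, so the Cookie header grows
# on every round trip until a proxy-side header limit rejects it.
LIMIT = 8 * 1024  # assumed proxy limit on the Cookie header, in bytes

def buggy_set_cookie(received: str) -> str:
    """The application bug: re-emit all received cookies plus a new one."""
    fresh = "sid=" + "x" * 64
    return received + "; " + fresh if received else fresh

def rounds_until_rejected(limit: int = LIMIT) -> int:
    """Count client/server round trips until the header exceeds the limit."""
    cookie = ""
    for round_trip in range(1, 10000):
        cookie = buggy_set_cookie(cookie)
        if len("Cookie: " + cookie) > limit:
            return round_trip
    return -1

print(rounds_until_rejected())
```

Since the header grows by roughly 70 bytes per round trip, an 8 kB ceiling is crossed after a bit over a hundred exchanges, which is why such bugs surface as "blocked request" reports rather than as gradual slowdowns.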

Of course we'll always find situations where default settings will have
to be changed whatever the component, but it seems like 4 kB is a
reasonable value that many components will support without too much
trouble.

Concerning the behaviour upon errors, rejecting the request is the
most common one. Buffer overflows are not expected, but they used to
happen in tiny HTTP servers embedded in cheap WiFi routers or NAS
appliances. I'd suspect that some servers are able to automatically
reallocate more space until they run out of memory. But quite frankly,
the goal is much more to suggest something which should fit most uses
and work almost everywhere than to guess how a given agent or server
handles the failure.

> I was wondering if Steve Souders could get us stats.

It's not easy to measure: you never know what components are installed
between you and the site you're testing. You can only measure what you
observe at large sites and on large proxies. And even then, you don't
necessarily know whether some broken communications automatically fall
back to more reasonable values.

For instance, I have a customer who recently got some reports that a
limit on the number of cookies was apparently being reached for a few
of their users. As a result, they'll probably remove a few cookies. So,
as you can see, there's an automatic regulation when recommended limits
are about to be crossed.

Received on Wednesday, 29 June 2011 19:58:13 UTC
