Re: #282: Recommend minimum sizes for protocol elements

From: Willy Tarreau <w@1wt.eu>
Date: Wed, 22 Jun 2011 08:00:21 +0200
To: Mark Nottingham <mnot@mnot.net>
Cc: httpbis Group <ietf-http-wg@w3.org>
Message-ID: <20110622060021.GE18843@1wt.eu>

Hi Mark,

On Wed, Jun 22, 2011 at 10:58:28AM +1000, Mark Nottingham wrote:
> <http://trac.tools.ietf.org/wg/httpbis/trac/ticket/282>
> 
> Combined proposal:
> 
> 
> For HTTP headers, insert at end of p1 3.2:
> 
> """
> HTTP does not place a pre-defined limit on the length of header fields, either in isolation or as a set. A server MUST be prepared to receive request headers of unbounded length and respond with the 413 (Request Entity Too Large) status code if the received header(s) would be longer than the server wishes to handle (see Section 8.4.14 of [Part2]).
> 
> A client that receives response headers that are longer than it wishes to handle can only treat it as a server error.
> 
> Various ad-hoc limitations on header length are found in practice. It is RECOMMENDED that all HTTP senders and recipients support messages whose combined headers have 20,000 or more octets.
> """

As we discussed one month ago on this subject, shouldn't we recommend even
smaller sizes? Developers who find it normal to fill the 20kB with cookies
will create totally unusable applications. The case I observed, where a
buggy application made a cookie header repeat itself until requests carried
7kB of headers, was practically unusable over the Internet. Common web sites
average something like 80 objects per page nowadays, which means that at
20kB of headers per request you have to *upload* 1.6 MB of headers to fetch
the whole page. On my ADSL line (1024/256), this takes 50 seconds of
saturated uplink bandwidth. On an HSDPA 3G connection with a 64kbps uplink,
it takes 200 seconds, or 3m20, to retrieve the whole page.
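
The arithmetic behind those figures is simple enough; a back-of-envelope
sketch in Python, with the 80-object page and the link speeds above as the
only assumptions:

    objects_per_page = 80       # rough average for a common web site
    header_octets = 20000       # 20kB of request headers per object
    total_bits = objects_per_page * header_octets * 8  # 12.8 Mbit to upload

    for link, uplink_bps in [("ADSL 1024/256", 256000), ("HSDPA 3G", 64000)]:
        seconds = total_bits / uplink_bps
        print("%s: %d seconds of saturated uplink" % (link, seconds))

    # ADSL 1024/256: 50 seconds of saturated uplink
    # HSDPA 3G: 200 seconds of saturated uplink (i.e. 3m20)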

I'm insisting a bit on this because in the past, every piece of ugliness I
observed could be stopped precisely because it caused interoperability
issues. For instance, developers who stored the user's entire browsing
history in a cookie had to stop doing so because an Alteon load balancer
could not parse requests that did not fit in the first 1500 bytes. In the
end they fixed their application to keep that large amount of data in
application-side session storage. If at that time they had seen the 20kB
suggestion, they would have stood their ground and declared the Alteon
faulty.

From my experience, 4kB of request line plus headers is already a lot and
extremely rare. As Poul-Henning reported, there are cases with much larger
values on some internal networks, but that does not really count, since far
uglier oddities can be observed on internal enterprise networks (after all,
that's where IE6 still lives and where connection-based auth is still found).

And having run haproxy with a 7kB limit for 7-8 years now, the handful of
situations where it was not enough were all due to application bugs that
would not have fit within the 20kB limit either.

So we should do whatever we can to avoid encouraging ugliness, and I think
that suggesting 4kB would be much more net-friendly.

Thanks,
Willy