Re: #282: Recommend minimum sizes for protocol elements

On 22/06/2011, at 4:00 PM, Willy Tarreau wrote:

> Hi Mark,
> 
> On Wed, Jun 22, 2011 at 10:58:28AM +1000, Mark Nottingham wrote:
>> <http://trac.tools.ietf.org/wg/httpbis/trac/ticket/282>
>> 
>> Combined proposal:
>> 
>> 
>> For HTTP headers, insert at end of p1 3.2:
>> 
>> """
>> HTTP does not place a pre-defined limit on the length of header fields, either in isolation or as a set. A server MUST be prepared to receive request headers of unbounded length and respond with the 413 (Request Entity Too Large) status code if the received header(s) would be longer than the server wishes to handle (see Section 8.4.14 of [Part2]).
>> 
>> A client that receives response headers that are longer than it wishes to handle can only treat them as a server error.
>> 
>> Various ad-hoc limitations on header length are found in practice. It is RECOMMENDED that all HTTP senders and recipients support messages whose combined header fields total at least 20,000 octets.
>> """
> 
> As we discussed one month ago on this subject, shouldn't we recommend even
> smaller sizes? Developers who consider it normal to fill 20kB with
> cookies will create totally unusable applications. The case I observed,
> with 7kB of headers caused by a buggy application making a cookie header
> repeat itself, was completely unusable over the network. Common web
> sites average something like 80 objects per page nowadays, which means
> that at 20kB of headers per request, you have to *upload* 1.6 MB of
> headers to fetch the whole page. On my ADSL line (1024/256), this takes
> 50 seconds of saturated uplink bandwidth. On an HSDPA 3G connection with
> a 64kbps uplink, it takes 200 seconds, i.e. 3m20s, to retrieve the whole
> page.
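
For what it's worth, the arithmetic there checks out; a quick sketch in
Python, using the figures quoted above (variable names are mine):

    # Quick check of the figures quoted above (illustrative only).
    objects = 80                   # objects per page
    hdr_per_req = 20_000           # header octets per request at the 20kB ceiling
    total_bits = objects * hdr_per_req * 8   # 1.6 MB of headers = 12.8 Mbit
    print(total_bits / 256_000)    # ~50 s on a 256 kbps ADSL uplink
    print(total_bits / 64_000)     # ~200 s (3m20s) on a 64 kbps 3G uplink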

Please re-read the text; it's not recommending that people create large headers, but instead that implementers who choose to impose limits in their implementations do so consistently, with a floor.
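
To make "a floor" concrete, here is a minimal sketch of a server that
imposes its limit at the recommended floor rather than at some arbitrary
buffer size, and answers 413 beyond it (Python stdlib; the 20,000-octet
constant mirrors the proposed text, everything else is illustrative):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    HEADER_FLOOR = 20_000  # octets; any configured limit should be >= this

    class LimitedHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # On-the-wire size of the received header fields,
            # counting "Name: value\r\n" for each one.
            total = sum(len(k) + len(v) + 4 for k, v in self.headers.items())
            if total > HEADER_FLOOR:
                # Reject only above the floor, never below it.
                self.send_error(413, "Request Entity Too Large")
                return
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), LimitedHandler).serve_forever()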


> I'm insisting a bit on this because in the past, all the ugliness I
> observed could be stopped thanks to interoperability issues. For
> instance, developers storing a user's entire browsing history in a
> cookie had to stop doing so because an Alteon LB could not parse
> requests that did not fit in the first 1500 bytes. In the end they
> fixed their application to store that large amount of data in app-side
> session storage. If at the time they had seen the 20kB suggestion, they
> would have stood their ground and declared the Alteon faulty.

I'd argue that the Alteon is faulty for requiring requests to fit in 1500 bytes. I'm not saying that requests should be that big, but the fact is that on the Web today they very often are, and implementations that impose limits at that point *will* cause interop problems.


> In my experience, 4kB of request plus headers is already a lot, and
> extremely rare. As Poul-Henning reported, there are cases with much
> larger values on some internal networks, but those don't really count,
> since far uglier peculiarities can be observed on internal enterprise
> networks (after all, that's where IE6 still lives and where
> connection-based auth can be found).

My experience is that 10K is common in requests and responses; i.e., you don't see it on every site, but if you run a forward proxy, you'll see it every day.
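
If you want to verify that against your own traffic, here is a rough
sketch of the measurement (plain Python; assumes you can get at the raw
request head, e.g. from a capture or a proxy log):

    def header_octets(raw_head: bytes) -> int:
        # Octets of header fields: everything between the request line
        # and the blank line that terminates the head.
        head, _, _ = raw_head.partition(b"\r\n\r\n")
        _request_line, _, fields = head.partition(b"\r\n")
        return len(fields)

    # e.g. header_octets(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n") -> 17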


> And having run haproxy with a 7kB limit for 7-8 years now, the handful
> of situations where it was not enough were all due to application bugs
> that would not have fit in a 20kB limit either.
> 
> So whatever we can do to discourage ugliness should be done, and I
> think that suggesting 4kB would be much more net-friendly.

OK, that's a suggestion; what do other folks think?


--
Mark Nottingham   http://www.mnot.net/

Received on Wednesday, 22 June 2011 06:14:50 UTC