- From: Willy Tarreau <w@1wt.eu>
- Date: Mon, 2 May 2011 08:13:10 +0200
- To: Mark Nottingham <Mnot@mnot.net>
- Cc: HTTP Working Group <ietf-http-wg@w3.org>
On Mon, May 02, 2011 at 11:46:42AM +1000, Mark Nottingham wrote:
> <http://trac.tools.ietf.org/wg/httpbis/trac/ticket/282>
>
> We talked about this briefly in Prague. Since then I've put together a straw-man proposal:
>
> Insert at end of p1 3.2:
>
> > HTTP does not place a pre-defined limit on the length of header fields, either in isolation or as a set. A server MUST be prepared to receive request headers of unbounded length and respond with the 413 (Request Entity Too Large) status code if the received header(s) would be longer than the server wishes to handle (see Section 8.4.14 of [Part2]).
> >
> > A client that receives response headers that are longer than it wishes to handle can only treat it as a server error.
> >
> > Various ad-hoc limitations on header length are found in practice. It is RECOMMENDED that all HTTP senders and recipients support messages whose combined headers have 20,000 or more octets.
>
> Add section to p1 Security Considerations:
>
> > 11.5 Protocol Element Size Overflows
> >
> > Because HTTP uses mostly textual, character-delimited fields, attackers can overflow buffers in implementations, and/or perform a Denial of Service against implementations that accept fields with unlimited lengths.
> >
> > To promote interoperability, this specification makes specific recommendations for size limits on request-targets [ref] and blocks of header fields [ref]. These are minimum recommendations, chosen to be supportable even by implementations with limited resources; it is expected that most implementations will choose substantially higher limits.
> >
> > This specification also provides a way for servers to reject messages that have request-targets that are too long [ref] or request entities that are too large [ref].
> >
> > Other fields (including but not limited to request methods, response status phrases, header field-names, and body chunks) SHOULD be limited by implementations carefully, so as to not impede interoperability.
>
> Thoughts?
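(To make the proposed behaviour concrete, here is a minimal sketch of what the quoted text asks of a server: compare the combined size of the received header block against a limit and answer 413 when it is exceeded. The function name and the 20,000-octet constant are illustrative only; the constant comes from the RECOMMENDED figure above, and real servers would pick their own, usually higher, limit.)

```python
# Hypothetical sketch, not from the draft: enforcing a combined
# header-block size limit as described in the straw-man proposal.

MAX_HEADER_BLOCK = 20_000  # octets; the minimum RECOMMENDED in the proposal

def check_header_block(raw_headers: bytes):
    """Return an HTTP status code to send, or None if the headers fit."""
    if len(raw_headers) > MAX_HEADER_BLOCK:
        return 413  # Request Entity Too Large, per the proposed text
    return None

# A small header block passes; an oversized one is rejected.
assert check_header_block(b"Host: example.com\r\n") is None
assert check_header_block(b"X-Big: " + b"a" * 25_000) == 413
```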
We can go further than this, of course, but IME request-target and headers are the big ones.

Some servers have a per-header size limit, and developers sometimes have to allocate a string to store one header, or to limit their ability to send overly large headers (eg: Location, Set-Cookie). Maybe we could make suggestions on this point too? From what I recall, Apache has an 8kB per-header limit, which is plenty for all uses. We could also remind readers that if some headers are too large (eg: Set-Cookie), the client will likely experience very poor performance when uploading that header to fetch objects.

Another point concerning the arbitrary 20kB size: many hardware-based devices have much lower limits (1-3 MSS = 1.5-4.5 kB). I used to run at 2kB in haproxy, which turned out to be too small after about 5 years of service. Now it's configurable and defaults to 15kB from a 16kB buffer, and I have never received any complaint about such a size. I was once notified by a user running at 7kB who had a bug in his application (generating headers in loops), and haproxy was the first one to trigger the limit. I always run at 7kB everywhere without any issue. From my experience, it seems that the developers who care about performance don't abuse headers or cookies, and the careless ones who don't care about anything don't know how to manipulate headers.

It's important to give rough figures, because allocating that space requires a lot of RAM. For instance, one million concurrent sessions at 20kB is 20GB of RAM. Some applications need to keep those headers for the whole duration of the request (logging, data manipulation, ...). Thus I'd prefer that we suggest something like "at least 4kB on space-constrained systems, and at least 20kB for safer interoperability".

Cheers,
Willy
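(The capacity arithmetic above can be written out as a one-liner; this is just an illustration of the figures quoted, with a hypothetical helper name, not anything from haproxy itself.)

```python
# Rough sizing arithmetic: per-session header buffer times concurrent
# sessions, in decimal units (1 GB = 1,000,000 kB).
def header_ram_gb(sessions: int, buf_kb: int) -> float:
    return sessions * buf_kb / 1_000_000

# One million sessions at a 20kB buffer need ~20GB just for headers;
# the 4kB suggestion for constrained systems brings that down to ~4GB.
assert header_ram_gb(1_000_000, 20) == 20.0
assert header_ram_gb(1_000_000, 4) == 4.0
```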
Received on Monday, 2 May 2011 06:13:37 UTC