- From: Greg Wilkins <gregw@intalio.com>
- Date: Thu, 29 May 2014 11:52:42 +0200
- To: Amos Jeffries <squid3@treenet.co.nz>
- Cc: HTTP Working Group <ietf-http-wg@w3.org>
- Message-ID: <CAH_y2NGQbkVm3J=-J6XHFzH99--b1tORSj5imE7LAWafOLvbEw@mail.gmail.com>
On 29 May 2014 06:52, Amos Jeffries <squid3@treenet.co.nz> wrote:

> Personally I am in favour of 64K limit on headers

That is an enormous increase in the resource commitment required by servers - per stream! It is at least 8x the de facto standard, and that is not counting compression.

Roberto says that the same 16k size limit has been applied to everything - which is not a bad idea. So why exclude the poor servers from this?

Servers must hold onto all the headers to make them available throughout request processing, so allowing 64KB of compressed headers, which could easily expand to much more than that, is a big commitment.

The meta data channel required for transport of HTTP semantics is much smaller than that. 8KB covers almost all cases - especially on the request side of things.

Sure, future protocols are probably going to want more and more meta data - but why do we have to make the transport meta data channel available to such future protocols? Let them open their own high-priority stream, or send additional header sets within the data stream.

Let's not open the floodgates on the transport meta data channel, complete with its special exclusions from flow control and segmentation, just to make it even more attractive to use!

regards

--
Greg Wilkins <gregw@intalio.com>
http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
http://www.webtide.com advice and support for jetty and cometd
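To make the resource-commitment argument concrete, here is a minimal sketch of the kind of per-stream guard a server would need: it bounds the *decoded* header bytes held for each stream, regardless of how small the compressed HPACK block was on the wire. This is not Jetty code; the class, method names, and the 8KB budget are illustrative assumptions. The 32-byte per-entry overhead mirrors HPACK's own size accounting.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: bound the decompressed header bytes a server
// is willing to hold per stream. Not Jetty or any real HTTP/2 API;
// all names and the 8KB figure are illustrative.
public class HeaderBudget {
    // The de facto limit cited above: ~8KB covers almost all requests.
    private static final int MAX_DECODED_HEADER_BYTES = 8 * 1024;

    // Decoded header bytes held so far, keyed by stream id.
    // (A real server would need a thread-safe structure.)
    private final Map<Integer, Integer> usedByStream = new HashMap<>();

    /**
     * Account for one decoded name/value pair on a stream.
     * Returns false if accepting it would exceed the budget, in which
     * case the caller should refuse the stream (e.g. RST_STREAM or
     * 431 Request Header Fields Too Large).
     */
    public boolean accept(int streamId, String name, String value) {
        // 32 bytes of per-entry overhead, as in HPACK's size accounting.
        int size = name.length() + value.length() + 32;
        int used = usedByStream.getOrDefault(streamId, 0) + size;
        if (used > MAX_DECODED_HEADER_BYTES) {
            return false;
        }
        usedByStream.put(streamId, used);
        return true;
    }

    /** Release the budget once the stream completes. */
    public void release(int streamId) {
        usedByStream.remove(streamId);
    }
}
```

The point of the sketch is that the budget is charged against decoded sizes held for the life of the request, so a 64KB compressed allowance multiplies across every concurrent stream.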
Received on Thursday, 29 May 2014 09:53:10 UTC