Re: Negotiating compression

On 28 May 2014 03:59, David Krauss <potswa@gmail.com> wrote:

> I’ve not implemented HPACK, but what works for a big server handling a
> million connections should usually work for a tiny MCU.


Indeed this is very true!

As somebody who has implemented a server handling a million connections, I'm
very much concerned by the resource requirements HTTP/2 implies for a
server.   Not only does a server have to commit to storing the headers that
can result from a 16K compressed header frame, but it may then receive an
unlimited number of CONTINUATION frames.
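
To make the concern concrete, here is a minimal sketch (hypothetical class
and names, not Jetty code) of the guard a server ends up needing: it
accumulates the header block across the HEADERS frame and any CONTINUATION
frames, and refuses to buffer past a configured cap:

    import java.io.ByteArrayOutputStream;

    public class HeaderBlockAccumulator {
        private final int maxHeaderBlockSize;        // e.g. 8 * 1024
        private final ByteArrayOutputStream block = new ByteArrayOutputStream();

        public HeaderBlockAccumulator(int maxHeaderBlockSize) {
            this.maxHeaderBlockSize = maxHeaderBlockSize;
        }

        /** Called for the HEADERS fragment and each CONTINUATION fragment. */
        public void append(byte[] fragment) {
            if (block.size() + fragment.length > maxHeaderBlockSize) {
                // Refuse before buffering (or decompressing) any further.
                // A real server would have to tear down the connection,
                // since its HPACK state is now unsynchronized.
                throw new IllegalStateException(
                    "header block exceeds " + maxHeaderBlockSize + " bytes");
            }
            block.write(fragment, 0, fragment.length);
        }

        public byte[] complete() { return block.toByteArray(); }
    }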

Sure, a server can opt not to accept large headers, but if HTTP/2 is going
to facilitate a web where browsers can and do send such large headers, then
opting out of large headers amounts to opting out of the web.

I just do not see the need for the HTTP/2 transport metadata channel
to grow beyond its current size.  After all, we are only trying to support
what is done with HTTP/1.1 now, so 8K headers should be sufficient, and any
new application with large metadata can put it in a data stream!
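
For illustration, here is a sketch of that alternative (hypothetical
endpoint and content type, for the example only): the request carries only
small, fixed-size headers, while the bulky metadata travels in the
flow-controlled data stream as the body:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class MetadataInBody {
        public static void main(String[] args) throws Exception {
            byte[] largeMetadata = new byte[64 * 1024]; // far beyond an 8K header budget

            HttpURLConnection conn = (HttpURLConnection)
                new URL("http://example.com/resource").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            // Small, fixed-size header advertising what the body carries...
            conn.setRequestProperty("Content-Type", "application/x-large-metadata");
            // ...while the bulk goes over the data stream, subject to flow control.
            try (OutputStream body = conn.getOutputStream()) {
                body.write(largeMetadata);
            }
            System.out.println("status: " + conn.getResponseCode());
        }
    }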

(So with my tin foil hat on, I see a conspiracy!) I'm told nobody is going to
send servers such big headers... so why then are we going to such lengths to
support them in the protocol?

cheers


-- 
Greg Wilkins <gregw@intalio.com>
http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that scales
http://www.webtide.com  advice and support for jetty and cometd.

Received on Wednesday, 28 May 2014 10:20:23 UTC