Re: Negotiating compression

The DoS surface from HTTP headers is not increased over HTTP/1, which, unlike
HTTP/2, would also suffer from a requirement to tear down the whole transport
when that kind of thing happens.
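
To make the contrast concrete: an HTTP/2 endpoint can refuse a single request
by resetting just that stream, while the connection and every other stream on
it carry on; HTTP/1 has no equivalent short of closing the TCP connection once
a response is underway. A minimal sketch of the stream reset, purely
illustrative, using Go's golang.org/x/net/http2 package:

    package main

    import (
        "bytes"
        "fmt"

        "golang.org/x/net/http2"
    )

    func main() {
        // Serialize an RST_STREAM for stream 5: a 13-byte frame that
        // kills one request without touching the rest of the connection.
        var buf bytes.Buffer
        fr := http2.NewFramer(&buf, nil)
        if err := fr.WriteRSTStream(5, http2.ErrCodeEnhanceYourCalm); err != nil {
            panic(err)
        }
        fmt.Printf("RST_STREAM frame: % x\n", buf.Bytes())
    }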

I believe I've mentioned before that we have seen rare but multi-megabyte
headers in the past. Before that experience we had, amusingly, assumed that
16k was enough for anyone, and we were wrong by orders of magnitude.

If our proxy sees something like this unexpectedly, it terminates that
request. If it is expected, the request is allowed through.
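
Not our proxy's actual code, but the policy amounts to something like this
sketch with Go's standard net/http server (the 64 KiB cap is an invented
number for illustration): a strict default limit on request headers, raised
only on listeners where large headers are expected.

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        srv := &http.Server{
            Addr: ":8080",
            // Refuse requests whose headers exceed 64 KiB. On HTTP/1
            // the server answers 431; Go's HTTP/2 path derives its
            // advertised header-list limit from the same knob, so the
            // offending stream is refused without tearing down the
            // connection.
            MaxHeaderBytes: 64 << 10,
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                w.Write([]byte("ok\n"))
            }),
        }
        log.Fatal(srv.ListenAndServe())
    }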

If one takes into account the resources needed for OS/system and
cryptographic overheads, you should find that HTTP/2 uses fewer resources
than HTTP/1 at just about any level of concurrency above 1, and even more so
when the receiver has set its maximum compression state to zero.
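
("Max state" here is the HPACK dynamic table size, which the receiver
advertises via SETTINGS_HEADER_TABLE_SIZE.) A minimal sketch of the
zero-state case with Go's golang.org/x/net/http2/hpack package, illustrative
rather than any particular implementation:

    package main

    import (
        "bytes"
        "fmt"

        "golang.org/x/net/http2/hpack"
    )

    func main() {
        // A decoder whose dynamic table is capped at zero retains no
        // compression state between header blocks: per-connection
        // memory stays constant, at the cost of some compression ratio.
        dec := hpack.NewDecoder(0, func(f hpack.HeaderField) {
            fmt.Printf("%s: %s\n", f.Name, f.Value)
        })

        var buf bytes.Buffer
        enc := hpack.NewEncoder(&buf)
        enc.SetMaxDynamicTableSize(0) // honor the receiver's advertised limit
        if err := enc.WriteField(hpack.HeaderField{Name: ":method", Value: "GET"}); err != nil {
            panic(err)
        }

        if _, err := dec.Write(buf.Bytes()); err != nil {
            panic(err)
        }
    }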

Our experience also shows that second-order effects further decrease
connection load: as people realize that domain sharding doesn't help latency
with HTTP/2, un-sharding ends up reducing connection count by a factor of
more than 6. This reduces CPU, memory, and bandwidth load.
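
(For scale: browsers typically open up to six parallel HTTP/1 connections per
hostname, so collapsing even a single hostname onto one multiplexed HTTP/2
connection is already a six-fold reduction, and every shard hostname retired
on top of that multiplies the savings.)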

-=R
On May 28, 2014 3:25 AM, "Greg Wilkins" <gregw@intalio.com> wrote:

>
> On 28 May 2014 03:59, David Krauss <potswa@gmail.com> wrote:
>
>> I’ve not implemented HPACK, but what works for a big server handling a
>> million connections should usually work for a tiny MCU.
>
>
> Indeed this is very true!
>
> As somebody who has implemented a server handling a million connections,
> I'm very much concerned by the resource requirements HTTP/2 implies for a
> server.  Not only does a server have to commit to storing the headers that
> can result from a 16k compressed header frame, but it may then receive an
> unlimited number of CONTINUATION frames on top of that.
>
> Sure, a server can opt not to accept large headers, but if HTTP/2 is going
> to facilitate a web where browsers can and do send such large headers, then
> all that choice does is opt that server out of the web.
>
> I just do not see the need for HTTP/2's transport metadata channel to grow
> beyond its current size.  After all, we are only trying to support what is
> done with HTTP/1.1 now, so 8K headers should be sufficient, and any new
> application with large metadata can put it in a data stream!
>
> (So, with my tinfoil hat on, I see conspiracy!)  I'm told nobody is going
> to send servers such big headers... so why then are we going to such
> lengths to support them in the protocol?
>
> cheers
>
> --
> Greg Wilkins <gregw@intalio.com>
> http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that
> scales
> http://www.webtide.com  advice and support for jetty and cometd.
>
