
Re: Our Schedule

From: Michael Sweet <msweet@apple.com>
Date: Mon, 26 May 2014 12:35:14 -0400
Cc: James M Snell <jasnell@gmail.com>, Cory Benfield <cory@lukasa.co.uk>, Mark Nottingham <mnot@mnot.net>, Greg Wilkins <gregw@intalio.com>, HTTP Working Group <ietf-http-wg@w3.org>
Message-id: <2EA9A8C5-ECA8-4551-AA70-05DEABCEA487@apple.com>
To: Patrick McManus <pmcmanus@mozilla.com>

On May 26, 2014, at 10:45 AM, Patrick McManus <pmcmanus@mozilla.com> wrote:
> ...
> I disagree. The fundamental value of HTTP/2 lies in mux and priority, and to enable both of those you need to be able to achieve a high level of parallelism. Due to CWND complications, the only way to do that on the request path has been shown to be with a compression scheme. gzip accomplished that but had a security problem; thus HPACK. Other schemes are plausible, and ones such as James's were considered, but some mechanism is required.
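The CWND argument quoted above can be made concrete with some back-of-the-envelope arithmetic. The numbers here are illustrative assumptions, not figures from the thread: a typical TCP segment size, a common initial congestion window of 10 segments, uncompressed request headers around 1.4 KB, and a compressed follow-up request around 200 bytes.

```python
# Illustrative arithmetic (assumed numbers, not from the thread): how many
# requests fit into the initial congestion window, with and without header
# compression on the request path.

MSS = 1460        # typical TCP maximum segment size, bytes
INIT_CWND = 10    # common initial congestion window, segments
window = MSS * INIT_CWND  # bytes sendable before the first ACK

uncompressed = window // 1400  # ~1.4 KB of headers per uncompressed request
compressed = window // 200     # ~200 B per HPACK-compressed follow-up request

print(uncompressed, compressed)  # 10 vs 73 requests in the first round trip
```

Under these assumptions, compression is the difference between roughly ten requests in flight during the first round trip and several dozen, which is where the mux-and-priority value comes from.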

I see several key problems with the current HPACK:

1. The compression state is hard to manage, particularly for proxies.
2. HEADERS frames hold up the show (issue #481).
3. There is no way to negotiate a connection without Huffman compression of headers (issue #485).
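The state-management point (item 1) can be sketched with a toy model of HPACK's dynamic table. This is an illustrative simplification with assumed sizes, not the full RFC 7541 encoder; the point is that the table is mutable, per-connection state, so a proxy must keep one such table per direction on every connection it bridges and hold each exactly in sync with its peer.

```python
# Toy model of an HPACK-style dynamic table (not the full RFC 7541
# algorithm): a FIFO of header fields evicted against a size budget.

class DynamicTable:
    def __init__(self, max_size=4096):
        self.max_size = max_size
        self.entries = []   # newest first, matching HPACK's indexing order
        self.size = 0

    @staticmethod
    def entry_size(name, value):
        # RFC 7541 charges 32 bytes of per-entry overhead.
        return len(name) + len(value) + 32

    def add(self, name, value):
        needed = self.entry_size(name, value)
        # Evict oldest entries until the new one fits the budget.
        while self.entries and self.size + needed > self.max_size:
            old = self.entries.pop()
            self.size -= self.entry_size(*old)
        if needed <= self.max_size:
            self.entries.insert(0, (name, value))
            self.size += needed

table = DynamicTable(max_size=120)
table.add("cookie", "a" * 40)    # 40 + 6 + 32 = 78 bytes
table.add("user-agent", "test")  # 46 bytes; 78 + 46 > 120, so "cookie" is evicted
```

Because both endpoints mutate their copies in lockstep as headers flow, a proxy cannot simply forward compressed header blocks; it must decode and re-encode against a separate table for each hop.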

*If* we can come up with a header compression scheme that does not suffer from these problems, it might be worth the added complexity in order to avoid TCP congestion window issues. But given that we are already facing 3.5 RTTs worth of latency just to negotiate a TLS connection, I'm not convinced that compressing the request headers will yield a user-visible improvement in web browsing speed.
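The latency comparison above can be quantified. The RTT value here is an assumption for illustration; the "3.5 RTTs" figure is from the message, and the one-round-trip saving is a generous upper bound on what fitting more requests into the first congestion window could buy.

```python
# Back-of-the-envelope for the argument above (RTT value assumed, not from
# the thread): connection-setup latency vs the round trip that header
# compression could plausibly save.

RTT_MS = 100
setup_ms = 3.5 * RTT_MS              # TCP + TLS handshakes before the first request
compression_saving_ms = 1 * RTT_MS   # at best, one fewer round trip of requests

print(setup_ms, compression_saving_ms)  # 350.0 ms of setup vs 100 ms saved
```

On these assumed numbers, handshake latency dominates the potential compression win by more than 3x, which is the crux of the skepticism expressed here.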

Michael Sweet, Senior Printing System Engineer, PWG Chair

Received on Monday, 26 May 2014 16:35:46 UTC
