Patrick,
On May 26, 2014, at 10:45 AM, Patrick McManus <pmcmanus@mozilla.com> wrote:
> ...
> I disagree. The fundamental value of http/2 lies in mux and priority, and to enable both of those you need to be able to achieve a high level of parallelism. Due to CWND complications, the only way to do that on the request path has been shown to be with a compression scheme. gzip accomplished that but had a security problem - thus hpack. Other schemes are plausible, and ones such as James's were considered, but some mechanism is required.
I see several key problems with the current HPACK:
1. The compression state is hard to manage, particularly for proxies, which have to keep separate HPACK state for each hop (see the sketch after this list).
2. HEADERS frames hold up the show: a header block has to be transmitted contiguously, so it blocks every other stream on the connection until it is done (issue #481).
3. There is no way to negotiate a connection without Huffman compression of headers (issue #485).
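To make point 1 concrete, here is a minimal sketch in Python. It is not real HPACK encoding or wire format, and the names are mine, but it shows the shape of the problem: the dynamic table is per-connection, per-direction state, so a proxy cannot copy a compressed header block from one hop to the other; it has to fully decode against one table and re-encode against another.

    class DynamicTable:
        """Toy stand-in for an HPACK dynamic table: most-recent-first list of
        (name, value) pairs bounded by a size limit (sizing is simplified)."""
        def __init__(self, max_size=4096):
            self.entries = []          # index 0 is the most recently added entry
            self.max_size = max_size

        def size(self):
            # HPACK charges 32 bytes of overhead per entry; mirrored here.
            return sum(len(n) + len(v) + 32 for n, v in self.entries)

        def add(self, name, value):
            self.entries.insert(0, (name, value))
            while self.size() > self.max_size:   # evict oldest entries
                self.entries.pop()

    class ToyEncoder:
        """Emits either an index into its own dynamic table or a literal.
        The peer's decoder must apply the same additions in the same order;
        that shared state is exactly what a proxy has to track per hop."""
        def __init__(self):
            self.table = DynamicTable()

        def encode(self, headers):
            out = []
            for name, value in headers:
                if (name, value) in self.table.entries:
                    out.append(("indexed", self.table.entries.index((name, value))))
                else:
                    out.append(("literal", name, value))
                    self.table.add(name, value)
            return out

    # A proxy terminating HTTP/2 on both sides needs independent compression
    # state for every connection and direction; none of it can be shared or
    # forwarded across hops.
    client_side_encoder = ToyEncoder()   # state tied to the client connection
    origin_side_encoder = ToyEncoder()   # separate state tied to the origin connection

    request = [(":method", "GET"), (":path", "/"), ("user-agent", "example")]
    print(client_side_encoder.encode(request))   # first pass: all literals, table fills
    print(client_side_encoder.encode(request))   # second pass: all indexed (state-dependent)
    print(origin_side_encoder.encode(request))   # the origin hop starts from scratch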
*If* we can come up with a header compression scheme that does not suffer from these problems, it might be worth the added complexity to avoid TCP congestion window issues. But given that we are already facing 3.5 RTTs' worth of latency just to negotiate a TLS connection, I'm not convinced that compressing the request headers will yield a user-visible improvement in the speed of their web browsing experience.
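To put rough numbers on that (every figure below - RTT, link rate, header sizes - is an assumption for illustration, not a measurement):

    rtt_s            = 0.050        # assume a 50 ms round-trip time
    setup_rtts       = 3.5          # the TCP + TLS negotiation cost cited above
    uncompressed_hdr = 800          # assumed bytes of request headers, uncompressed
    compressed_hdr   = 200          # assumed bytes after header compression
    bandwidth_bps    = 5_000_000    # assumed 5 Mbit/s uplink

    setup_latency_ms = setup_rtts * rtt_s * 1000
    saved_ms_per_req = (uncompressed_hdr - compressed_hdr) * 8 / bandwidth_bps * 1000

    print(f"connection setup:      {setup_latency_ms:.0f} ms")   # ~175 ms
    print(f"header bytes saved:    {saved_ms_per_req:.2f} ms")   # ~1 ms per request

Under those assumptions the per-request serialization savings are on the order of a millisecond against 175 ms of setup; any user-visible benefit would have to come from the congestion-window/parallelism effect Patrick describes, not from the raw byte savings.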
_________________________________________________________
Michael Sweet, Senior Printing System Engineer, PWG Chair