Re: Our Schedule

What would be wrong with:

1. Opting for a significantly less complicated, less aggressive compression
strategy by default; and
2. Leveraging extensibility and negotiation in the framing layer so that
HPACK can be developed and experimented with independently (sketch below).
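
A rough sketch of what I mean by (2), in Python. SETTINGS_HEADER_COMPRESSION
and its codepoints are entirely made up (http/2 defines no such setting);
the point is only that a negotiation hook lets schemes evolve independently:

    # Hypothetical setting: a bitmask of header-compression schemes the
    # sender is willing to decode. Codepoints are invented for this sketch.
    NONE, HPACK = 0x1, 0x2

    def choose_scheme(ours: int, theirs: int) -> str:
        """Pick a scheme both peers can decode, falling back to
        plain (uncompressed) header blocks."""
        common = ours & theirs
        return "hpack" if common & HPACK else "none"

    print(choose_scheme(NONE | HPACK, NONE))   # -> none
    print(choose_scheme(NONE | HPACK, HPACK))  # -> hpack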

There's nothing I've seen so far that convinces me that HPACK ought to be a
normative requirement in http/2.

On May 26, 2014 10:53 AM, "Mark Nottingham" <mnot@mnot.net> wrote:

> Michael,
>
> On 27 May 2014, at 2:35 am, Michael Sweet <msweet@apple.com> wrote:
>
> > Patrick,
> >
> > On May 26, 2014, at 10:45 AM, Patrick McManus <pmcmanus@mozilla.com>
> wrote:
> >> ...
> >> I disagree. The fundamental value of http/2 lies in mux and priority,
> and to enable both of those you need to be able to achieve a high level of
> parallelism. Due to CWND complications, the only way to do that on the
> request path has been shown to be with a compression scheme. gzip
> accomplished that but had a security problem; thus HPACK. Other schemes
> are plausible, and ones such as James's were considered, but some mechanism
> is required.
> >
> > I see several key problems with the current HPACK:
> >
> > 1. The compression state is hard to manage, particularly for proxies.
> > 2. HEADERS frames hold up the show (issue #481).
> > 3. There is no way to negotiate a connection without Huffman compression
> of headers (issue #485); see the sketch below.
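> >
> > To illustrate (3): an encoder can already leave the H bit clear on every
> > string literal, so the pain is on the decode side, where Huffman support
> > remains mandatory. A minimal sketch of a plain (non-Huffman) literal
> > encoding, covering only the short-string case and assuming the draft's
> > byte layout:
> >
> >     def encode_string(s: bytes) -> bytes:
> >         # High (H) bit left clear: raw octets, no Huffman coding.
> >         assert len(s) < 127   # 7-bit length prefix case only
> >         return bytes([len(s)]) + s
> >
> >     def literal_without_indexing(name: bytes, value: bytes) -> bytes:
> >         # Leading 0x00: literal header field without indexing, new name.
> >         return b"\x00" + encode_string(name) + encode_string(value)
> >
> >     print(literal_without_indexing(b"user-agent", b"example/1.0").hex())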
> >
> > *If* we can come up with a header compression scheme that does not
> suffer from these problems, it might be worth the added complexity in order
> to avoid TCP congestion window issues. But given that we are already facing
> 3.5 RTTs' worth of latency just to negotiate a TLS connection, I'm not
> convinced that compressing the request headers will yield a user-visible
> improvement in browsing speed.
>
> The previous discussion that Patrick was referring to has a lot of
> background.
>
> In a nutshell, he made an argument for header compression a while back (I
> can dig up the references if you like), where he basically showed that for
> a very vanilla page load, merely getting the requests out onto the wire
> (NOT getting any responses) would take something like 8-11 RTTs, just
> because of the interaction between request header sizes and congestion
> windows. This assumes that the page has 80 assets (the average is now over
> 100, according to the HTTP Archive), and request headers are around 1400
> bytes (again, not uncommon).
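>
> As a sanity check on the shape of that math, here's a toy slow-start model
> (parameters assumed: a 1460-byte MSS and RFC 6928's ten-segment initial
> window; Patrick's 8-11 figure came from real traces, which include costs
> this ignores, like response traffic and multiple connections):
>
>     MSS = 1460                   # bytes per TCP segment
>     to_send = 80 * 1400          # 112,000 bytes of request headers
>     cwnd, sent, rtts = 10 * MSS, 0, 0
>     while sent < to_send:
>         sent += cwnd             # one congestion window per round trip
>         cwnd *= 2                # slow start doubles cwnd each RTT
>         rtts += 1
>     print(rtts)                  # -> 4 with IW10; three-segment windows need 5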
>
> In contrast, with compressed headers (his experiment was with gzip), you
> can serialise all of those requests into one RTT, perhaps even a single
> packet.
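>
> The effect is easy to reproduce. A toy version (not his experiment; the
> headers are made up and differ only in the path):
>
>     import zlib
>
>     headers = [
>         ("GET /assets/img/%03d.png HTTP/1.1\r\n" % i)
>         + "Host: example.com\r\n"
>         + "User-Agent: Mozilla/5.0 (toy example)\r\n"
>         + "Cookie: " + "x" * 1200 + "\r\n\r\n"   # pad toward ~1400 bytes
>         for i in range(80)
>     ]
>     raw = "".join(headers).encode()
>     print(len(raw), len(zlib.compress(raw, 9)))
>     # raw is ~100 KB; compressed, it is on the order of a single
>     # packet, because the requests are nearly identical.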
>
> This is a very persuasive argument when our focus is on reducing end-user
> perceived latency. It’s especially persuasive when you think of the
> characteristics of an average mobile connection.
>
> HPACK is not as efficient as gzip, and as we’ve said many times, our goal
> is NOT extremely high compression; rather, it’s safety. If we could ignore
> the CRIME attack, we would use gzip instead, and I don’t think we’d be
> having this discussion.
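>
> For anyone who hasn't looked at CRIME closely, its core is a length
> oracle; a toy sketch with made-up values:
>
>     import zlib
>
>     # Attacker-controlled text compressed in the same stream as a secret:
>     # the compressed LENGTH leaks whether a guess matches the secret.
>     SECRET = b"Cookie: session=s3cr3t\r\n"
>
>     def oracle(guess: bytes) -> int:
>         return len(zlib.compress(b"Cookie: session=" + guess + SECRET))
>
>     # A guess sharing a prefix with the secret compresses slightly smaller.
>     print(oracle(b"s3"), oracle(b"zz"))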
>
> Hope this helps,
>
> --
> Mark Nottingham   http://www.mnot.net/
>

Received on Monday, 26 May 2014 18:18:27 UTC