
Re: Our Schedule

From: Simone Bordet <simone.bordet@gmail.com>
Date: Mon, 26 May 2014 20:21:26 +0200
Message-ID: <CAFWmRJ1-34EyenmCxV4nP9my-rc5Yqif-DXcE_sbRux8s1vNZw@mail.gmail.com>
To: Mark Nottingham <mnot@mnot.net>
Cc: Michael Sweet <msweet@apple.com>, Patrick McManus <pmcmanus@mozilla.com>, James M Snell <jasnell@gmail.com>, Cory Benfield <cory@lukasa.co.uk>, Greg Wilkins <gregw@intalio.com>, HTTP Working Group <ietf-http-wg@w3.org>

On Mon, May 26, 2014 at 7:53 PM, Mark Nottingham <mnot@mnot.net> wrote:
> The previous discussion that Patrick was referring to has a lot of background.
> In a nutshell, he made an argument for header compression a while back (I can dig up the references if you like), where he basically showed that for a very vanilla page load, merely getting the requests out onto the wire (NOT getting any responses) would take something like 8-11 round trips, just because of the interaction between request header sizes and congestion windows. This assumes that the page has 80 assets (the average is now over 100, according to the HTTP Archive), and request headers are around 1400 bytes (again, not uncommon).
> In contrast, with compressed headers (his experiment was with gzip), you can serialise all of those requests into one RTT, perhaps even a single packet.
> This is a very persuasive argument when our focus is on reducing end-user perceived latency. It’s especially persuasive when you think of the characteristics of an average mobile connection.
> HPACK is not as efficient as gzip, and as we’ve said many times, our goal is NOT extremely high compression; rather, it’s safety. If we could ignore the CRIME attack, we would use gzip instead, and I don’t think we’d be having this discussion.
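The congestion-window arithmetic in the quoted argument can be sketched roughly as follows. The MSS, initial window, and per-request byte counts here are illustrative assumptions of mine, not Patrick's exact figures, so the round-trip counts will not match his 8-11 precisely:

```python
# Rough sketch (my assumptions, not from the thread) of why uncompressed
# request headers cost multiple round trips: during TCP slow start the
# sender can only emit cwnd segments per RTT, doubling each round trip.
# Assumed: 1460-byte MSS, initial cwnd of 4 segments.

def round_trips(total_bytes, mss=1460, initcwnd=4):
    """Round trips needed to push total_bytes through TCP slow start."""
    cwnd, sent, rtts = initcwnd, 0, 0
    while sent < total_bytes:
        sent += cwnd * mss  # bytes the window allows this round trip
        cwnd *= 2           # slow start: window doubles every RTT
        rtts += 1
    return rtts

uncompressed = 80 * 1400  # 80 requests, ~1400 bytes of headers each
compressed   = 80 * 50    # assumed ~50 bytes each after compression

print(round_trips(uncompressed))  # several round trips just to send requests
print(round_trips(compressed))    # fits in the initial window
```

With the compressed size, all 80 requests fit inside the initial congestion window, which is the "one RTT, perhaps even a single packet" point above.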

Thanks for summing this up for a newcomer like me.
If the goal is safety, then zero compression also works.

May I ask what plan B is if a vulnerability is found in HPACK, as
happened with CRIME?
Would that require an HTTP/2.1?
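For context, the CRIME weakness comes from compressing attacker-controlled input in the same context as a secret, so that the compressed *length* leaks information. A minimal sketch of that length oracle (the secret value and names here are made up for illustration):

```python
import zlib

# Hypothetical secret that ends up in the same compression context
# as attacker-supplied bytes.
SECRET = b"Cookie: session=X7"

def probe(guess):
    # The attacker observes only the compressed length of guess + secret.
    return len(zlib.compress(guess + SECRET, 9))

right = probe(b"Cookie: session=")  # long shared prefix with the secret
wrong = probe(b"Cookie: sessiom=")  # one byte off: shorter back-reference
print(right, wrong)                 # the correct guess compresses better
```

This is why a DEFLATE-style scheme like gzip is unsafe for headers, and why HPACK deliberately avoids cross-field string matching.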

My suggestion about making header compression negotiable was along
these lines: simple implementations would disable it and pay the price
Patrick showed, while better implementations would negotiate HPACK.
If HPACK is replaced, improved, or found vulnerable, we could at least
run less efficiently but safely until the compression algorithm alone
(and not HTTP as a whole) has been reviewed.
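A minimal sketch of the negotiation being suggested (entirely hypothetical: HTTP/2's SETTINGS frame carries no such parameter, and the scheme names below are invented):

```python
# Hypothetical sketch of negotiable header compression; HTTP/2 has no
# such mechanism -- this only illustrates the fallback idea.
SUPPORTED = ["hpack", "none"]  # "none" = zero compression, always safe

def choose_compression(peer_offer):
    """Pick the first mutually supported scheme, falling back to 'none'."""
    for scheme in SUPPORTED:
        if scheme in peer_offer:
            return scheme
    return "none"

print(choose_compression(["hpack", "none"]))  # capable peer
print(choose_compression(["none"]))           # simple peer: no compression
```

The point is that "none" is always a valid meeting point, so a compromised algorithm could be switched off without revising the rest of the protocol.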

Thanks !

Simone Bordet
--
Finally, no matter how good the architecture and design are,
to deliver bug-free software with optimal performance and reliability,
the implementation technique must be flawless. -- Victoria Livschitz
Received on Monday, 26 May 2014 18:21:53 UTC
