Re: Compression ratio of HPACK.

From: Jeff Pinner <jpinner@twitter.com>
Date: Fri, 31 Jan 2014 09:50:19 -0800
Message-ID: <CA+pLO_htfqEr9JfaR1KuaB1S=nBM_uBMZiRCQ=jphw_LEZ3GAA@mail.gmail.com>
To: Amos Jeffries <squid3@treenet.co.nz>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
On Fri, Jan 31, 2014 at 9:24 AM, Amos Jeffries <squid3@treenet.co.nz> wrote:

> On 1/02/2014 5:22 a.m., Jeff Pinner wrote:
> > Just to sanity check I navigated to www.google.com -- the page load
> > made 11 requests, each with the same 8 identical cookies. Thus the
> > reference set in this example saves 8 bytes per frame (as opposed to
> > the 1.57 bytes in your analysis).
> >
> > I'd suggest trying to capture data on a single domain to show the
> > effect the compression has on repeated cookies.
> >
>
> Here I was being pleased at how much that test resembled real-world
> middleware traffic.
>
> These are good example results for what we middleware people have been
> saying: the compression, TLS, etc. have very little benefit compared to
> the amount of CPU/RAM resource consumption required to maintain the state.
>
>
For middleware boxes that do connection coalescing for multiple clients, it
is almost certainly true that without the ability to divide the reference
set into groups (a feature that was dropped due to its perceived
complexity), the best thing to do is to clear the reference set before each
header set. That's why the opcode was added to do just that.
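The trade-off being discussed can be sketched with a toy byte-accounting model. This is a simplification for illustration only (the 1-byte cost per indexed header field and the helper names are assumptions, not the actual draft-HPACK encoding):

```python
# Toy model of the draft-HPACK reference set (assumption: an indexed
# header field costs 1 byte; a header carried over implicitly by the
# reference set costs 0 bytes).

def encoded_size(headers, reference_set, clear_first=False):
    """Estimate bytes to encode one header set; mutates reference_set."""
    if clear_first:
        reference_set.clear()      # the "clear reference set" opcode
    cost = 0
    for name in headers:
        if name not in reference_set:
            cost += 1              # re-emit as a 1-byte indexed field
            reference_set.add(name)
        # else: 0 bytes -- still referenced from the previous header set
    return cost

cookies = [f"cookie{i}" for i in range(8)]   # 8 identical cookies/request

ref = set()
with_ref = sum(encoded_size(cookies, ref) for _ in range(11))

ref = set()
cleared = sum(encoded_size(cookies, ref, clear_first=True)
              for _ in range(11))

print(with_ref, cleared)  # 8 vs 88
```

In this model only the first of the 11 requests pays for the cookies; the other 10 carry them over for free, which is the roughly 8-bytes-per-frame saving mentioned above. Clearing before every header set gives up that saving, but for a coalesced connection serving many clients the reference set rarely matches the next client's headers anyway.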


> PS. The HTTP/2 test data so far appears to have been very biased towards
> traffic as seen on connections which are terminated at an origin server
> or browser / UA client, simply by being grouped by :authority. Traffic as
> seen by middleware is much more volatile.
>
> Amos
>
>
>
Received on Friday, 31 January 2014 17:50:48 UTC