
HTTP/2 Server Push and solid compression

From: Alan Egerton <eggyal@gmail.com>
Date: Tue, 21 May 2019 15:33:34 +0100
Message-ID: <CA+phaedE0m4LniC38GBkJ-M0gAph0LSSGhQ1ZWJE6k0UOFcokw@mail.gmail.com>
To: ietf-http-wg@w3.org

Dear all,

It's my understanding that one of the design goals of HTTP/2 Server
Push was to address issues in HTTP/1.1 that delayed page load and were
worked around by resource "bundling" (e.g. as part of an offline
compilation step).

Indeed (and especially when taken together with support for ES6
modules, now in most browsers), HTTP/2 Server Push finally makes it
conceivable that web apps might in future be deployed without any such
bundling (and, in simple cases, perhaps without any compilation step).

That said, bundling did (perhaps unintentionally) introduce another
advantage: solid compression—that is, "the compression of a
concatenation" (which is usually far more efficient than "the
concatenation of compressions").  Accordingly, if an HTTP/2 server
separately compresses each underlying source and transmits them as
separate resources, the total payload will likely be greater in size
than if the server were to somehow first combine those sources and
compress them as a single resource.  This difference can be very
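The effect is easy to reproduce with any DEFLATE-style compressor.  As
an illustrative sketch (the two "modules" below are made-up stand-ins
for real source files that share a lot of text), Python's zlib shows
the gap between compressing separately and compressing a concatenation:

```python
import zlib

# Two hypothetical JS modules with heavily overlapping content
# (illustrative only; real bundled sources overlap less dramatically).
mod_a = b"export function render(el) { el.textContent = 'hello'; }\n" * 20
mod_b = b"export function render(el) { el.textContent = 'world'; }\n" * 20

# "Concatenation of compressions": each resource compressed on its own.
separate = len(zlib.compress(mod_a)) + len(zlib.compress(mod_b))

# "Compression of a concatenation": bundle first, then compress once,
# so redundancy *across* the two modules can be exploited.
solid = len(zlib.compress(mod_a + mod_b))

print(separate, solid)
assert solid < separate
```

The numbers depend entirely on how much the inputs share, but the
direction of the inequality is the point of the paragraph above.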

I'm sure I'm preaching to the converted if I point out that such a
naive approach would be a definite regression: a clear advantage of
pushing resources separately is that they can each be cached without
any change to infrastructure; conversely, bundled resources get cached
as a bundle and requests for an intersecting set of resources cannot
be serviced (even partly) from the cache.

I see two possible solutions:

(1) standardise the bundle format in order that caches can separate
and store the underlying resources: plenty of hazards here—especially
since there will no longer be one HTTP response per resource,
requiring metadata (including cache control etc.) to be encoded in
some other way.  My gut says this is probably a bad idea.

(2) use a compression format that produces a separate output file for
each input file, yet still achieves better overall compression than
compressing the files individually: I imagine that this will produce
an additional output file that is common to/referenced by all the
compressed files being returned by that single operation;
decompression of any of the transmitted resources would be achieved
using only the common file and the resource-specific file as input.
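For what it's worth, DEFLATE's preset-dictionary mechanism already
behaves roughly like this, and Python's zlib exposes it.  A sketch
under that assumption (the "common file" here is just a shared prefix;
a real scheme would derive the dictionary from all the sources):

```python
import zlib

# Hypothetical per-resource payloads sharing common boilerplate.
common = b"export function component(tag, props, children) { return build(tag, props, children); }\n"
res_a = common + b"export const A = component('div', {}, []);\n"
res_b = common + b"export const B = component('span', {}, []);\n"

def compress_with_dict(data: bytes, zdict: bytes) -> bytes:
    # Compress one resource, letting it back-reference the shared dictionary.
    c = zlib.compressobj(zdict=zdict)
    return c.compress(data) + c.flush()

def decompress_with_dict(blob: bytes, zdict: bytes) -> bytes:
    # Only the common file and the resource-specific blob are needed.
    d = zlib.decompressobj(zdict=zdict)
    return d.decompress(blob) + d.flush()

ca = compress_with_dict(res_a, common)
cb = compress_with_dict(res_b, common)

# Each resource round-trips independently of the others.
assert decompress_with_dict(ca, common) == res_a
assert decompress_with_dict(cb, common) == res_b

# The dictionary spares each blob from re-encoding the shared text.
assert len(ca) < len(zlib.compress(res_a))
```

The per-resource outputs stay separately cacheable, while the shared
dictionary recovers (some of) the solid-compression win.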

Perhaps I'm way off the mark, and neither approach is feasible—or
perhaps I am overstating the problem?  Is this an area that has been
explored already?

-- Alan
Received on Tuesday, 21 May 2019 14:34:08 UTC
