Re: HTTP/2 Server Push and solid compression

On Tue, May 21, 2019 at 3:33 PM Alan Egerton <eggyal@gmail.com> wrote:
> I see two possible solutions:
>
> (1) standardise the bundle format in order that caches can separate
> and store the underlying resources: plenty of hazards here—especially
> since there will no longer be an HTTP response per resource, requiring
> metadata (including cache control etc.) to be encoded some other way.
> My gut says this is probably a bad idea.
>
> (2) use a compression format that produces a separate output file for
> each input file, yet still achieves better overall compression than
> compressing the files individually: I imagine that this will produce
> an additional output file that is common to/referenced by all the
> compressed files being returned by that single operation;
> decompression of any of the transmitted resources would be achieved
> using only the common file and the resource-specific file as input.
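For concreteness, here's a minimal sketch of what (2) might look like
using zlib's preset-dictionary support.  The file names, contents and
the hand-built "common file" are all illustrative; a real scheme would
derive the dictionary automatically from the inputs:

    import zlib

    # Hypothetical resources to be served together (contents illustrative).
    resources = {
        "app.js":  b"function render(){document.body.innerHTML='<div class=card>';}",
        "page.js": b"function page(){document.body.innerHTML='<div class=card>';}",
    }

    # A naive "common file": shared substrings concatenated by hand.
    common = b"function document.body.innerHTML='<div class=card>';}"

    # Compress each resource against the shared dictionary, yielding one
    # small output per input plus the single common dictionary.
    compressed = {}
    for name, data in resources.items():
        co = zlib.compressobj(zdict=common)
        compressed[name] = co.compress(data) + co.flush()

    # Decompressing any one resource needs only its own output plus `common`.
    do = zlib.decompressobj(zdict=common)
    assert do.decompress(compressed["app.js"]) + do.flush() == resources["app.js"]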

Just following up my own thoughts with an observation: taken to the
extreme, these two approaches actually converge.

For example, a .tar.gz could serve as both the standardised "bundle"
format (1) and the common output file (2), with the metadata
transmitted as separate HTTP responses (1) whose payloads reference
the relevant constituent of that tarball (2).
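A rough sketch of that shape (the bundle's name and contents are
hypothetical, and note that offset_data is a long-standing but
undocumented TarInfo attribute):

    import io, tarfile

    # Build a hypothetical bundle in memory (names/contents illustrative).
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, data in [("app.js", b"alert(1);"), ("style.css", b"body{}")]:
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

    # Each per-resource HTTP response payload could reference its
    # constituent by offset and length within the *uncompressed* tar.
    buf.seek(0)
    with tarfile.open(fileobj=buf, mode="r:gz") as tar:
        for member in tar.getmembers():
            # offset_data gives the member's data offset in the tar stream.
            print(member.name, member.offset_data, member.size)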

I recognise that such an approach would also be a regression, because
it defeats the benefits of HTTP/2's multiplexing: gzip must be
decompressed from the start of the stream, so the constituents of the
tarball only become available in sequence.  Any solution of type (2)
must therefore balance two competing requirements: minimising the
"common file" (upon which every resource depends before it can be
decoded) and minimising the overall transmitted size.  Perhaps no
such balance yields a material benefit over the status quo.
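That balance could at least be measured.  A crude sketch, reusing the
illustrative inputs from the first sketch; on inputs this small the
dictionary overhead will almost certainly outweigh the savings, which
is rather the point:

    import zlib

    # Hypothetical inputs, as in the earlier sketch.
    resources = {
        "app.js":  b"function render(){document.body.innerHTML='<div class=card>';}",
        "page.js": b"function page(){document.body.innerHTML='<div class=card>';}",
    }
    common = b"function document.body.innerHTML='<div class=card>';}"

    # Status quo: each resource compressed on its own.
    individual = sum(len(zlib.compress(d)) for d in resources.values())

    # Type (2): per-file outputs against the shared dictionary, plus
    # the common file itself (transmitted once, compressed standalone).
    shared = len(zlib.compress(common))
    for data in resources.values():
        co = zlib.compressobj(zdict=common)
        shared += len(co.compress(data) + co.flush())

    print("individually:", individual, "bytes; shared dictionary:", shared, "bytes")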

-- Alan

Received on Tuesday, 21 May 2019 15:17:12 UTC