
Re: HTTP/2 Server Push and solid compression

From: Felipe Gasper <felipe@felipegasper.com>
Date: Tue, 21 May 2019 13:06:45 -0400
Cc: ietf-http-wg@w3.org
Message-Id: <898C6E1B-73ED-4F13-B435-745680E662AB@felipegasper.com>
To: Alan Egerton <eggyal@gmail.com>

> On May 21, 2019, at 11:16 AM, Alan Egerton <eggyal@gmail.com> wrote:
> On Tue, May 21, 2019 at 3:33 PM Alan Egerton <eggyal@gmail.com> wrote:
>> I see two possible solutions:
>> (1) standardise the bundle format in order that caches can separate
>> and store the underlying resources: plenty of hazards here—especially
>> since there will no longer be an HTTP response per resource, requiring
>> metadata (including cache control etc.) to be encoded some other way.  My
>> gut says this is probably a bad idea.
>> (2) use a compression format that produces a separate output file for
>> each input file, yet still achieves better overall compression than
>> compressing the files individually: I imagine that this will produce
>> an additional output file that is common to/referenced by all the
>> compressed files being returned by that single operation;
>> decompression of any of the transmitted resources would be achieved
>> using only the common file and the resource-specific file as input.
> Just following my own thoughts with an observation: in extremis, these
> two approaches can actually become analogous.
> For example, a .tar.gz could serve as both the standardised "bundle"
> format (1) and the common output file (2) with the metadata
> transmitted in the form of separate HTTP responses (1) whose payloads
> reference the relevant constituent of that tarball (2).
> I recognise that such an approach would also be a regression, because
> it defeats the benefits of HTTP/2's multiplexing (the constituents of
> the tarball only become available in sequence); therefore any solution
> of type (2) must balance the competing requirements to minimise both
> the "common file" and the overall size.  Perhaps there is no such
> balance that yields material benefit over the status quo.
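To make option (2) concrete: zlib's preset-dictionary support already behaves this way, in miniature. Each resource compresses to its own output blob, and decompressing any one of them requires only the shared dictionary (the "common file") plus that resource's blob. The resources and dictionary below are invented purely for illustration:

```python
import zlib

# Hypothetical resources with overlapping content.
resources = {
    "a.js": b"function render(data) { return data.map(x => x * 2); }",
    "b.js": b"function render(items) { return items.map(x => x + 1); }",
}

# Hypothetical "common file": byte sequences that recur across resources.
shared_dict = b"function render(items) { return items.map(x => "

def compress_with_dict(payload: bytes, zdict: bytes) -> bytes:
    # One independent output per resource, seeded with the shared dictionary.
    c = zlib.compressobj(zdict=zdict)
    return c.compress(payload) + c.flush()

def decompress_with_dict(blob: bytes, zdict: bytes) -> bytes:
    # Only the common dictionary and this resource's blob are needed.
    d = zlib.decompressobj(zdict=zdict)
    return d.decompress(blob) + d.flush()

for name, body in resources.items():
    blob = compress_with_dict(body, shared_dict)
    assert decompress_with_dict(blob, shared_dict) == body
```

The open question is exactly the one above: how to derive and transmit the dictionary so that (dictionary + per-resource blobs) beats compressing each resource alone.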

I’m reminded of WebSocket’s “permessage-deflate” extension, which includes parameters for controlling the retention of compression context from message to message.

Maybe a similar approach, retaining compression context across multiple payloads sent in series, would effectively eliminate the size difference?
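A rough sketch of what context retention buys, using zlib (the messages are invented for illustration; this mimics permessage-deflate's raw-deflate framing with a sync flush per message):

```python
import zlib

messages = [b'{"user": "alice", "event": "click", "ts": 1558451205}'] * 5

def deflate_independent(m: bytes) -> bytes:
    # "no_context_takeover": a fresh compressor per message.
    c = zlib.compressobj(wbits=-zlib.MAX_WBITS)
    return c.compress(m) + c.flush()

independent_total = sum(len(deflate_independent(m)) for m in messages)

# Context retained across messages: one compressor, flushed per message
# with Z_SYNC_FLUSH so each payload is a complete unit, while later
# messages can back-reference earlier ones.
comp = zlib.compressobj(wbits=-zlib.MAX_WBITS)
retained_total = sum(
    len(comp.compress(m) + comp.flush(zlib.Z_SYNC_FLUSH)) for m in messages
)

# With repetitive payloads, retaining context wins.
assert retained_total < independent_total
```

After the first message, each repeat deflates to little more than a back-reference plus the sync-flush trailer, which is essentially the effect one would want from a "common context" across pushed responses.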

Received on Tuesday, 21 May 2019 17:45:10 UTC
