
Addressing gzip focus

From: Daniel Sommermann <dcsommer@fb.com>
Date: Wed, 14 May 2014 15:22:36 -0700
Message-ID: <5373ECAC.6000804@fb.com>
To: HTTP Working Group <ietf-http-wg@w3.org>

Hi there,

I've noticed that a lot of the discussion on the list, and parts of the 
spec, assume gzip will be the compression algorithm everyone wants to 
use for a long time (see the "gzip at the server" discussion, the 
COMPRESSED DATA flag, and the requirement that HTTP/2 clients support 
gzip).

I'm concerned that this puts HTTP/2 in a fragile state where it will not 
be able to take advantage of better, more secure, or faster compression 
algorithms in the future. There is good reason to believe gzip is not 
the best we can do. Gzip's 32 KiB window is relatively small. We've seen 
the effectiveness of shared dictionaries for headers, but we don't have 
an easy way to change those dictionaries going forward. It's easy to 
imagine that other dictionaries or algorithms could be used for 
different response content types to reap further savings.
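The 32 KiB window limit is easy to see in practice. A small sketch using Python's zlib (gzip uses the same DEFLATE algorithm and window): a duplicated block of data is compressed away only if its first copy is still within the 32 KiB window.

```python
import os
import zlib

block = os.urandom(1024)  # 1 KiB of incompressible data

def compressed_size(gap_kib):
    # Two identical 1 KiB blocks separated by `gap_kib` KiB of random
    # filler. DEFLATE can back-reference the second copy only if the
    # first copy is within its 32 KiB sliding window.
    filler = os.urandom(gap_kib * 1024)
    return len(zlib.compress(block + filler + block, 9))

near = compressed_size(16)  # second copy ~17 KiB back: inside the window
far = compressed_size(64)   # second copy ~65 KiB back: outside the window

# The "near" case saves roughly 1 KiB (the duplicate is encoded as a few
# cheap back-references); the "far" case saves essentially nothing.
print(18432 - near, 67584 - far)
```

A larger window, or an algorithm with shared dictionaries, could exploit redundancy that gzip structurally cannot see.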

Is there a way we can factor (some of) these references out into 
capabilities that each side indicates to the other via SETTINGS? E.g. a 
SETTINGS_SUPPORTS_COMPRESSION setting whose value indicates the 
supported compression algorithms. The value -> compression algorithm 
mapping could be registered via IANA, for instance. 
SETTINGS_COMPRESS_DATA would then use 0 for uncompressed, 1 for gzip, 
and 2+ for any future registered compression algorithms+dictionaries 
advertised by the remote side's SETTINGS_SUPPORTS_COMPRESSION. I realize 
this delays the use of "extended" compression algorithms by at least one 
RTT, but for compressed responses I don't think that is too much of an 
issue.
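To make the idea concrete, here is a minimal sketch of the negotiation I have in mind. All names and the registry contents are illustrative, not from any spec: the registry maps setting values to algorithms, and a sender only picks a value its peer has advertised, falling back to 0 (uncompressed), which is always available.

```python
# Hypothetical IANA-style registry: SETTINGS value -> compression
# algorithm (possibly algorithm + dictionary combinations for 2+).
COMPRESSION_REGISTRY = {
    0: "identity",  # uncompressed, always available
    1: "gzip",
    # 2+: future registered algorithms/dictionaries
}

def choose_compress_data_value(remote_supported):
    """Pick a value for SETTINGS_COMPRESS_DATA given the set of values
    the peer advertised via SETTINGS_SUPPORTS_COMPRESSION.

    Policy here (picking the highest mutually supported value) is just
    one illustrative choice; 0 is always a valid fallback.
    """
    mutual = (set(COMPRESSION_REGISTRY) & set(remote_supported)) | {0}
    return max(mutual)
```

For example, a peer advertising only gzip (`{1}`) yields 1, while a peer advertising an algorithm we haven't registered locally falls back to 0.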

Daniel
Received on Wednesday, 14 May 2014 22:23:01 UTC
