
Re: h2#404 requiring gzip and/or deflate

From: Michael Sweet <msweet@apple.com>
Date: Tue, 25 Feb 2014 14:56:49 -0500
Cc: Jesse Wilson <jesse@swank.ca>, Jeff Pinner <jpinner@twitter.com>, Mark Nottingham <mnot@mnot.net>, Zhong Yu <zhong.j.yu@gmail.com>, Roberto Peon <grmocg@gmail.com>, Patrick McManus <mcmanus@ducksong.com>, HTTP Working Group <ietf-http-wg@w3.org>
Message-id: <73B6A9AF-F107-47FD-97A0-0B628B8CDC77@apple.com>
To: Martin Thomson <martin.thomson@gmail.com>

On Feb 25, 2014, at 2:28 PM, Martin Thomson <martin.thomson@gmail.com> wrote:

> On 25 February 2014 11:07, Michael Sweet <msweet@apple.com> wrote:
>> I'd have to check the old mailing list logs from 1998, but I think it wasn't intentional but rather a mistake based on the compression keyword values (none, compress, deflate, gzip) - the same sort of mistake HTTP implementors have made due to the unfortunate confusion between zlib the format and "deflate" the encoding name, where zlib is what is actually meant.
> Do you think that this is something that we can realistically fix?
> People have put up with bad "deflate" in HTTP/1.1, but do you think
> that there is any chance that we can address that problem in HTTP/2 by
> saying "deflate MUST be RFC1950, reject otherwise"?

Well, at the HTTP level you can certainly make that assertion/requirement.  My personal preference is to just require gzip support and document the known interoperability issues of deflate.
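(As an illustration of the "reject otherwise" alternative: in practice, many implementations instead decode "deflate" tolerantly. The helper below is a sketch, not from this message, using Python's zlib module - it tries the spec-compliant RFC 1950 zlib wrapper first, then falls back to raw RFC 1951 deflate.)

```python
import zlib

def inflate_deflate_body(data: bytes) -> bytes:
    """Decode a Content-Encoding: deflate body.

    HTTP defines "deflate" as zlib-wrapped (RFC 1950), but some
    implementations send raw deflate (RFC 1951).  This illustrative
    helper tries the spec-compliant form first, then falls back.
    """
    try:
        # RFC 1950: zlib header + deflate stream + Adler-32 trailer
        return zlib.decompress(data)
    except zlib.error:
        # RFC 1951: bare deflate stream (negative wbits = no wrapper)
        return zlib.decompress(data, -zlib.MAX_WBITS)
```

A strict "MUST be RFC 1950" rule would drop the fallback branch and fail on the raw form instead.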

I don't think the handful of extra bytes matter - you are still making the message body smaller.
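(To put numbers on "a handful of extra bytes" - this sketch is not from the original message - the framing overhead can be measured directly: gzip (RFC 1952) adds a 10-byte header and 8-byte trailer, zlib (RFC 1950) a 2-byte header and 4-byte Adler-32 trailer, while the deflate payload inside is the same either way when the compression level matches.)

```python
import gzip
import zlib

payload = b"Hello, HTTP/2! " * 64

# Use the same compression level so the inner deflate stream is identical.
gz = gzip.compress(payload, compresslevel=6)   # RFC 1952 framing
zl = zlib.compress(payload, 6)                 # RFC 1950 framing
co = zlib.compressobj(level=6, wbits=-zlib.MAX_WBITS)
raw = co.compress(payload) + co.flush()        # RFC 1951: no framing

print("gzip framing overhead:", len(gz) - len(raw), "bytes")
print("zlib framing overhead:", len(zl) - len(raw), "bytes")
```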

I also don't think the slightly slower speed of CRC32 vs. Adler32 matters - most of the time is spent actually compressing the data, not computing a checksum over the compressed bytes.
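(A rough micro-benchmark sketch, not from the original message, makes the point: on the same buffer, either checksum is a small fraction of the compression cost.)

```python
import timeit
import zlib

data = bytes(range(256)) * 1024  # 256 KiB of sample data
N = 50

crc_time = timeit.timeit(lambda: zlib.crc32(data), number=N)
adler_time = timeit.timeit(lambda: zlib.adler32(data), number=N)
compress_time = timeit.timeit(lambda: zlib.compress(data), number=N)

# Compression dominates; the CRC32-vs-Adler32 difference is noise by comparison.
print(f"crc32: {crc_time:.4f}s  adler32: {adler_time:.4f}s  "
      f"compress: {compress_time:.4f}s")
```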

I *do* think that interoperability will continue to be a bigger concern than raw performance.


General comment: there seems to be a lot of focus on getting the best compression in HTTP/2.0.  While this may be a worthwhile goal, I am concerned that it will both delay completion of HTTP/2.0 and result in a hard-to-implement protocol with serious interoperability issues.  IMHO the focus should be on addressing HTTP/1.1's major shortcomings, requiring or recommending existing features/technologies that already work well for HTTP/1.1, and limiting the number of variations that an implementation needs to work with, e.g., 1 compression algorithm vs. 2.

Today, gzip is probably the most widely implemented Content-Encoding for compressing data and has the fewest interoperability issues.  Requiring it in HTTP/2.0 definitely makes sense and will be a compelling improvement over HTTP/1.1.  Deflate/zlib has a known history of interoperability problems and provides almost identical performance characteristics to gzip, so it does not make sense to promote "deflate" to required for HTTP/2.0.

Michael Sweet, Senior Printing System Engineer, PWG Chair

Received on Tuesday, 25 February 2014 19:57:22 UTC
