
Re: current HTTP/2 spec prevents gzip of response to "Range" request

From: Mark Nottingham <mnot@mnot.net>
Date: Tue, 25 Mar 2014 12:50:20 +1100
Cc: Martin Thomson <martin.thomson@gmail.com>, Bjoern Hoehrmann <derhoermi@gmx.net>, K.Morgan@iaea.org, HTTP Working Group <ietf-http-wg@w3.org>
Message-Id: <0546C0B5-EE2C-4430-B6A7-CAB7AF904623@mnot.net>
To: Roy Fielding <fielding@gbiv.com>

On 25 Mar 2014, at 7:19 am, Roy T. Fielding <fielding@gbiv.com> wrote:

> On Mar 24, 2014, at 10:04 AM, Martin Thomson wrote:
>> On 24 March 2014 09:53, Bjoern Hoehrmann <derhoermi@gmx.net> wrote:
>>> Section 8.1.3. of the current draft is quite clear that implementations
>>> must not indicate acceptance of or support for transfer encodings using
>>> the `TE` header; it seems clear that `Transfer-Encoding: gzip, chunked`
>>> is not supposed to be used in HTTP/2.0, which is exactly the issue here.
>> That's because Transfer-Encoding is a hop-by-hop thing and the
>> non-standard "gzip" Transfer-Encoding would need to be understood by
>> all hops in the chain if it were to be used for ranges.  I'm not aware
>> of any use of Transfer-Encoding other than "chunked", because it's
>> virtually impossible to define a new one that interoperates.
>> As I said, I think that this is a confusion between two different but
>> oft-confused headers:
>> Content-Encoding: gzip
>> Transfer-Encoding: chunked
> It seems you have confused them.  Transfer Encoding is something
> that can be added or removed by the protocol.  Content Encoding is
> metadata about the representation.  If the protocol modifies a value
> for CE, then it breaks anything that relies on the payload being
> delivered without transformation (e.g., ranges and signature metadata).
> If HTTP/2 doesn't allow compression transfer encodings, then it doesn't
> allow compression by intermediaries.

... or at least proxies. Since gateways have a relationship with the origin, they can (and often do) coordinate compression with it.
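Roy's point about ranges is straightforward to demonstrate: byte ranges address the representation *as encoded*, so if the protocol transforms the payload, cached offsets and signatures no longer line up. A minimal, illustrative Python sketch (no real HTTP stack involved; the data and compression levels are arbitrary):

```python
import gzip

# A representation the origin serves with "Content-Encoding: gzip".
representation = b"All work and no play makes Jack a dull boy. " * 200
encoded = gzip.compress(representation)

# A Range request addresses bytes of the encoded representation.
range_0_99 = encoded[0:100]

# If an intermediary re-applied compression on its own (here simulated
# with a different compression level), the bytes at those same offsets
# would differ, breaking range reassembly and any signature computed
# over the original payload.
re_encoded = gzip.compress(representation, compresslevel=1)
offsets_still_valid = (range_0_99 == re_encoded[0:100])
```

The content-coding round-trips cleanly (`gzip.decompress(encoded)` recovers the representation), but the intermediary's re-encoding does not preserve byte offsets, which is exactly why ranges require the payload to be delivered without transformation.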

> I assumed that TE was replaced by
> a framing mechanism that indicates the payload has been compressed.
> If that isn't so, then HTTP/2 will be less efficient than HTTP/1 for
> some use cases (like CDNs).

It'd be more accurate to say that it's less efficient *in theory*.

Let's be clear: on the browsing Web, the gzip and deflate transfer-codings are basically unused. CDNs do not use them, browsers do not support them, and servers do not support them.

E.g., <https://issues.apache.org/bugzilla/show_bug.cgi?id=52860>.

> And, no, it isn't virtually impossible to introduce standard transfer
> codings.  It just requires effort by browsers to support one.
> It also isn't necessary to restrict HTTP/2 features to what a current
> browser supports.

That's true. However, transfer-codings other than chunked have resolutely failed to catch on over the past ~15 years. Content-Encoding -- warts and all -- is by far the broadest current practice.

We can certainly talk about re-introducing a flag to indicate that the payload of DATA is compressed. I don't see how we can require it to be used, however, since support for gzip transfer-codings is so poor on the existing Web.
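For context, the content-coding path described above as broadest current practice can be sketched minimally like this (illustrative only; header handling is simplified to a dict and no real HTTP stack is involved):

```python
import gzip

# Content-Encoding is metadata about the representation, applied
# end-to-end by the origin -- not a hop-by-hop transformation.
representation = b'{"status": "ok"}'

# Origin side: compress once and label the result.
payload = gzip.compress(representation)
headers = {"Content-Encoding": "gzip"}

# Client side: reverse the content-coding named in the header to
# recover the representation.
body = payload
if headers.get("Content-Encoding") == "gzip":
    body = gzip.decompress(body)
```

Because the encoding is part of the representation itself, intermediaries can pass the payload through untouched, which is what keeps ranges and signatures intact.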

There are also the security considerations, a la <https://github.com/http2/http2-spec/issues/423>.


Mark Nottingham   http://www.mnot.net/
Received on Tuesday, 25 March 2014 01:50:08 UTC
