- From: Mark Nottingham <mnot@mnot.net>
- Date: Tue, 25 Mar 2014 12:50:20 +1100
- To: Roy Fielding <fielding@gbiv.com>
- Cc: Martin Thomson <martin.thomson@gmail.com>, Bjoern Hoehrmann <derhoermi@gmx.net>, K.Morgan@iaea.org, HTTP Working Group <ietf-http-wg@w3.org>
On 25 Mar 2014, at 7:19 am, Roy T. Fielding <fielding@gbiv.com> wrote:

> On Mar 24, 2014, at 10:04 AM, Martin Thomson wrote:
>
>> On 24 March 2014 09:53, Bjoern Hoehrmann <derhoermi@gmx.net> wrote:
>>> Section 8.1.3. of the current draft is quite clear that implementations
>>> must not indicate acceptance of or support for transfer encodings using
>>> the `TE` header; it seems clear that `Transfer-Encoding: gzip, chunked`
>>> is not supposed to be used in HTTP/2.0, which is exactly the issue here.
>>
>> That's because Transfer-Encoding is a hop-by-hop thing and the
>> non-standard "gzip" Transfer-Encoding would need to be understood by
>> all hops in the chain if it were to be used for ranges. I'm not aware
>> of any use of Transfer-Encoding other than "chunked", because it's
>> virtually impossible to define a new one that interoperates.
>>
>> As I said, I think that this is a confusion between two different but
>> oft-confused headers:
>>
>>   Content-Encoding: gzip
>>   Transfer-Encoding: chunked
>
> It seems you have confused them. Transfer Encoding is something
> that can be added or removed by the protocol. Content Encoding is
> metadata about the representation. If the protocol modifies a value
> for CE, then it breaks anything that relies on the payload being
> delivered without transformation (e.g., ranges and signature metadata).
>
> If HTTP/2 doesn't allow compression transfer encodings, then it doesn't
> allow compression by intermediaries.

... or at least proxies. Since gateways have a relationship with the origin, they can (and often do) coordinate compression with it.

> I assumed that TE was replaced by
> a framing mechanism that indicates the payload has been compressed.
> If that isn't so, then HTTP/2 will be less efficient than HTTP/1 for
> some use cases (like CDNs).

It'd be more accurate to say that it's less efficient *in theory*.

Let's be clear; on the browsing Web, gzip and deflate transfer-codings are basically unused.
CDNs do not use them, browsers do not support them, servers do not support them. E.g., <https://issues.apache.org/bugzilla/show_bug.cgi?id=52860>.

> And, no, it isn't virtually impossible to introduce standard transfer
> codings. It just requires effort by browsers to support one.
>
> It also isn't necessary to restrict HTTP/2 features to what a current
> browser supports.

That's true. However, transfer-codings other than chunked have resolutely failed to catch on over the past ~15 years. Content-encoding -- warts and all -- is by far the broadest current practice.

We can certainly talk about re-introducing a flag to indicate that the payload of DATA is compressed. I don't see how we can require it to be used, however, since support for gzip transfer-codings is so poor on the existing Web.

There are also the security considerations, a la <https://github.com/http2/http2-spec/issues/423>.

Cheers,

--
Mark Nottingham   http://www.mnot.net/
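[Editorial note: the layering Roy describes above -- chunked as removable hop-by-hop framing wrapped around representation bytes that must arrive untouched -- can be sketched as follows. This is an illustrative example added to this archive copy, not part of the original message; the `dechunk` helper is a simplified decoder that ignores chunk extensions and trailers.]

```python
import gzip

# A response carrying "Content-Encoding: gzip" (end-to-end, part of the
# representation) inside "Transfer-Encoding: chunked" (hop-by-hop framing).
# Any hop may add or strip the chunked framing; it must not alter the
# gzipped bytes, or byte ranges and signatures over the representation break.

def dechunk(body: bytes) -> bytes:
    """Strip chunked transfer framing, returning the representation bytes.

    Simplified: assumes a well-formed body with no chunk extensions/trailers.
    """
    out, pos = b"", 0
    while True:
        crlf = body.index(b"\r\n", pos)
        size = int(body[pos:crlf], 16)          # chunk-size line is hex
        if size == 0:
            return out                           # zero-size last-chunk
        start = crlf + 2
        out += body[start:start + size]
        pos = start + size + 2                   # skip chunk data + CRLF

payload = gzip.compress(b"hello world")          # the representation (CE: gzip)

# Frame it as one data chunk followed by the last-chunk (TE: chunked).
chunked = (hex(len(payload))[2:].encode() + b"\r\n" + payload + b"\r\n"
           + b"0\r\n\r\n")

representation = dechunk(chunked)                # hop-by-hop layer removed
assert representation == payload                 # representation untouched
print(gzip.decompress(representation))           # content coding decoded last
```

The order matters: the transfer coding is removed first (and could be re-applied differently at each hop), while the content coding is decoded only by the recipient of the representation.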
Received on Tuesday, 25 March 2014 01:50:08 UTC