Re: current HTTP/2 spec prevents gzip of response to "Range" request

On 25.03.2014 02:50, Mark Nottingham wrote:
> On 25 Mar 2014, at 7:19 am, Roy T. Fielding <fielding@gbiv.com> wrote:
>
>> On Mar 24, 2014, at 10:04 AM, Martin Thomson wrote:
>>
>>> On 24 March 2014 09:53, Bjoern Hoehrmann <derhoermi@gmx.net> wrote:
>>>> Section 8.1.3. of the current draft is quite clear that implementations
>>>> must not indicate acceptance of or support for transfer encodings using
>>>> the `TE` header; it seems clear that `Transfer-Encoding: gzip, chunked`
>>>> is not supposed to be used in HTTP/2.0, which is exactly the issue here.
>>> That's because Transfer-Encoding is a hop-by-hop thing and the
>>> non-standard "gzip" Transfer-Encoding would need to be understood by
>>> all hops in the chain if it were to be used for ranges.  I'm not aware
>>> of any use of Transfer-Encoding other than "chunked", because it's
>>> virtually impossible to define a new one that interoperates.
>>>
>>> As I said, I think that this is a confusion between two different but
>>> oft-confused headers:
>>>
>>> Content-Encoding: gzip
>>> Transfer-Encoding: chunked
>> It seems you have confused them.  Transfer Encoding is something
>> that can be added or removed by the protocol.  Content Encoding is
>> metadata about the representation.  If the protocol modifies a value
>> for CE, then it breaks anything that relies on the payload being
>> delivered without transformation (e.g., ranges and signature metadata).
>>
>> If HTTP/2 doesn't allow compression transfer encodings, then it doesn't
>> allow compression by intermediaries.
Or decompression in an HTTP/1.1 to HTTP/2 gateway.
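Something like this, just as a rough sketch (the names are made up, and real
chunked framing is omitted): a gateway may add or remove a hop-by-hop
transfer-coding per connection, but it must leave Content-Encoding and the
representation bytes alone, otherwise byte ranges and signatures no longer
refer to the same data.

    import gzip

    # Rough sketch of a gateway hop; hypothetical, not from any implementation.
    # Transfer-Encoding is hop-by-hop and may be added/removed per connection;
    # Content-Encoding is representation metadata and is never touched here.
    def forward(headers, body, next_hop_supports_gzip_te):
        if headers.get("Transfer-Encoding") == "gzip, chunked":
            body = gzip.decompress(body)      # undo the previous hop's coding
            del headers["Transfer-Encoding"]
        if next_hop_supports_gzip_te:
            body = gzip.compress(body)        # re-code for the next hop only
            headers["Transfer-Encoding"] = "gzip, chunked"
        # Range offsets and any signature metadata still match the payload,
        # because the representation (and Content-Encoding) is unchanged.
        return headers, body
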
> ... or at least proxies. Since gateways have a relationship with the origin, they can (and often do) coordinate compression with it.
>> I assumed that TE was replaced by
>> a framing mechanism that indicates the payload has been compressed.
>> If that isn't so, then HTTP/2 will be less efficient than HTTP/1 for
>> some use cases (like CDNs).
> It'd be more accurate to say that it's less efficient *in theory*.
>
> Let's be clear; on the browsing Web, gzip and deflate transfer-codings are basically unused. CDNs do not use them, browsers do not support them, servers do not support them.
>
> E.g., <https://issues.apache.org/bugzilla/show_bug.cgi?id=52860>.
But you also find comments like the one in
https://bugzilla.mozilla.org/show_bug.cgi?id=68517:

Jon Hanna 2012-08-14 02:13:05 PDT

We started using content-encoding in cases where transfer-encoding is what we really want in the 1990s as a temporary kludge until the browsers added support, which would be any moment now...

>> And, no, it isn't virtually impossible to introduce standard transfer
>> codings.  It just requires effort by browsers to support one.
>> It also isn't necessary to restrict HTTP/2 features to what a current
>> browser supports.
> That's true. However, transfer-codings other than chunked have resolutely failed to catch on over the past ~15 years. Content-encoding -- with warts and all -- is by far the broadest current practice.
The question is why. For example, Opera introduced support for the gzip 
transfer-coding and then saw responses gzipped twice (also mentioned 
in the above link).
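
If a hop wants to apply a gzip transfer-coding anyway, it at least has to
check what it was handed. A minimal sketch (not taken from Opera or any
particular server):

    import gzip

    GZIP_MAGIC = b"\x1f\x8b"

    # Minimal sketch: don't gzip a payload that is already gzip, either
    # because Content-Encoding says so or because the bytes start with
    # the gzip magic number.
    def maybe_apply_gzip_te(headers, body):
        if headers.get("Content-Encoding") == "gzip" or body[:2] == GZIP_MAGIC:
            return headers, body
        headers["Transfer-Encoding"] = "gzip, chunked"
        return headers, gzip.compress(body)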

> We can certainly talk about re-introducing a flag to indicate that the payload of DATA is compressed. I don't see how we can require it to be used, however, since support for gzip transfer-codings is so poor on the existing Web.
Isn't this hidden in the HTTP/2 stack? It shouldn't be visible outside 
the HTTP/2 hop, although I guess you will see content compressed twice in 
some cases.
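
What I have in mind is something like the following; the flag and the
functions are purely hypothetical, nothing from the current draft. The sender
compresses DATA payloads for this connection only, the receiver undoes it
before anything leaves the HTTP/2 layer, and the message headers are
untouched:

    import gzip

    FLAG_COMPRESSED = 0x20   # hypothetical DATA frame flag, for illustration only

    def encode_data_payload(payload, peer_allows_compression):
        flags = 0
        if peer_allows_compression:
            payload = gzip.compress(payload)
            flags |= FLAG_COMPRESSED
        return flags, payload

    def decode_data_payload(flags, payload):
        if flags & FLAG_COMPRESSED:
            payload = gzip.decompress(payload)   # undone inside the h2 stack
        return payload    # nothing of this is visible outside this hop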

> There are also the security considerations, a la <https://github.com/http2/http2-spec/issues/423>.
>
> Cheers,
>
>
>
> --
> Mark Nottingham   http://www.mnot.net/

Regards,

Roland Zink         http://home.zinks.de/
