Re: current HTTP/2 spec prevents gzip of response to "Range" request

On 25 March 2014 06:42, Martin Thomson <martin.thomson@gmail.com> wrote:

> On 24 March 2014 13:19, Roy T. Fielding <fielding@gbiv.com> wrote:
>
> > If HTTP/2 doesn't allow compression transfer encodings, then it doesn't
> > allow compression by intermediaries. I assumed that TE was replaced by
> > a framing mechanism that indicates the payload has been compressed.
> > If that isn't so, then HTTP/2 will be less efficient than HTTP/1 for
> > some use cases (like CDNs).
>
> We had a long series of discussions on this point.  It may be that we
> failed to trigger the necessary reactions from folks at the time and
> this needs to be reopened.  I will not attempt to defend the process
> we took.
>
> 0. SPDY had a frame to indicate that frames were compressed
> http://tools.ietf.org/html/draft-mbelshe-httpbis-spdy-00#section-2.2.2
>
> 1. it was removed after some discussion:
> https://github.com/http2/http2-spec/issues/46
>
> http://http2.github.io/http2-spec/#changes.since.draft-ietf-httpbis-http2-01
> (point 2; -01 was 2013-01 FYI)
>
> 2. we discussed it more on-list
> http://lists.w3.org/Archives/Public/ietf-http-wg/2013AprJun/0865.html
>
>
If I'm reading it right: TE:gzip was removed because servers were wasting
resources recompressing things that were already CE:gzip, and because
people wanted both a "transfer-length"-type field to do their download
progress meters and a "content-length"-type field to do allocations and
whatnot. Those both seem like they have better solutions than just "no
TE:gzip".
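As a quick illustration of that first point (my own sketch, not anything
from the thread): naively applying a gzip transfer coding on top of a body
that is already CE:gzip burns CPU and typically makes the message bigger,
not smaller.

```python
import gzip

# A compressible representation, as a server might hold it.
body = b"<html>" + b"hello world " * 500 + b"</html>"

ce_gzip = gzip.compress(body)     # Content-Encoding: gzip (end-to-end)
te_gzip = gzip.compress(ce_gzip)  # a hop naively adding TE:gzip on top

# The second pass can't compress the already-high-entropy stream; it
# just adds gzip framing overhead.
print(len(body), len(ce_gzip), len(te_gzip))
```

Checking Content-Encoding before applying a compression transfer coding
avoids the waste without banning TE:gzip outright.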

I'm pretty sure I read through the entire discussion in point 2, and that
whole thread seemed to be people talking at cross purposes with confusion
between TE and CE.

I'm completely fine with continuing to mandate support for TE:chunked;
however, it seems like a regression to forbid other codings (such as
compression) when they may have had support in HTTP/1.1. Yes,
TE/Transfer-Encoding is a connection header, i.e. it's hop-by-hop, and if
there's a naive proxy in the way you'll lose it (and if it's a *really* naive
proxy you'll break stuff), but that's already the case, no?
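For what it's worth, the hop-by-hop bookkeeping a well-behaved proxy has to
do is mechanical enough to sketch (Python; the function name and shape are
mine, but the rule is the RFC 7230 section 6.1 one: drop the headers named
in Connection plus the standard per-hop set):

```python
# Standard hop-by-hop header names (lowercased for comparison).
HOP_BY_HOP = {
    "connection", "keep-alive", "proxy-authenticate", "proxy-authorization",
    "te", "trailer", "transfer-encoding", "upgrade",
}

def strip_hop_by_hop(headers: dict) -> dict:
    """Return only the end-to-end headers a proxy should forward."""
    # Connection: can name additional headers that are per-hop.
    connection = headers.get("Connection", "")
    per_hop = {name.strip().lower()
               for name in connection.split(",") if name.strip()}
    per_hop |= HOP_BY_HOP
    return {k: v for k, v in headers.items() if k.lower() not in per_hop}
```

A naive proxy that forwards everything verbatim skips exactly this step,
which is how TE gets lost (or breaks things) today.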

On the other side, I'm not sold on a single "gzip" bit. As people have
said, gzip is widely supported but it's not very good, and I don't like
being further tied to it in the protocol. Tied to it by convention, fine,
but not in MUST (or even SHOULD) level requirements. I'm of the opinion
that an extensible header, which can support arbitrary (and experimental)
codings, is much better, despite the fact that no one seems to get the
difference between CE and TE headers. I hope I'm not picking a fight with
proxy people here; they already have to unpack all the headers to handle
cache directives and whatnot, no? So asking them to also strip or modify a
TE/Transfer-Encoding header when they know that one end doesn't play nicely
doesn't seem too unreasonable.

On my soap box, I really wish we could just do away with CE. I'm happy for
foo.html.gz to be transferred as "Content-Type: application/x-gzip" instead
of "Content-Type: text/html\r\nContent-Encoding: gzip". However I know
that's never going to happen (and it's outside HTTP/2's charter anyway).

For the record: I routinely configure Apache to serve offline-compressed
versions of files [1], which I believe is the Right Way(tm) to do CE; and I've
written an API[2] that allows resources to be dynamically compressed, and
it jumps through hoops to provide content-encoded gzip (i.e. the Wrong
Way(tm)) because no one seems to do TE and I didn't want my code to go to
waste.

[1]: https://gist.github.com/phluid61/9750448
[2]: https://github.com/QUTlib/rest-rmr/blob/master/system/response.inc.php#L758-863


-- 
  Matthew Kerwin
  http://matthew.kerwin.net.au/

Received on Monday, 24 March 2014 22:21:33 UTC