Re: current HTTP/2 spec prevents gzip of response to "Range" request

In message <CAOdDvNrOJXu8r86A1wyShaj94ccTcEZByLnEcE_=SWGj-E66aw@mail.gmail.com>
, Patrick McManus writes:

>I'm unlikely to implement a gzip transfer encoding decoder any time soon.

And I don't think Varnish will either.

Having gone through all that stuff recently, I am well aware of all the
trouble with C-E, but at least it is known trouble at this point in time.

I'd like to add one observation which I have not seen mentioned yet:

A big issue with T-E: gzip is that you need logic to recognize already
compressed content.

You can either do this by attempting compression and finding out that
it doesn't help, which requires access to about 1k+ of data before you
can trust the result, or by keeping a magic list of format markers
(see file(1)) to tell you, which leaves you caught off-guard whenever
somebody does something new on the web.

Both are unattractive complications for very little gain.
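
(For illustration only, a rough sketch of both checks, assuming zlib
and a hand-maintained magic list -- this is not actual Varnish code:)

    #include <stdint.h>
    #include <string.h>
    #include <zlib.h>

    /* Option 1: magic-number list, file(1)-style.  Inherently incomplete. */
    static int
    looks_compressed(const uint8_t *p, size_t len)
    {
            if (len >= 2 && p[0] == 0x1f && p[1] == 0x8b)           /* gzip */
                    return (1);
            if (len >= 4 && memcmp(p, "PK\x03\x04", 4) == 0)        /* zip */
                    return (1);
            if (len >= 8 && memcmp(p, "\x89PNG\r\n\x1a\n", 8) == 0) /* png */
                    return (1);
            /* ...and so on, until somebody invents a new format. */
            return (0);
    }

    /* Option 2: trial-compress ~1k of the body and see if it shrinks. */
    static int
    worth_gzipping(const uint8_t *p, size_t len)
    {
            uint8_t buf[2048];
            uLongf dlen = sizeof buf;

            if (len > 1024)
                    len = 1024;     /* need roughly this much to trust it */
            if (compress2(buf, &dlen, p, len, Z_BEST_SPEED) != Z_OK)
                    return (0);
            return (dlen + len / 10 < len);  /* demand ~10% gain */
    }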

Analytically, about the only reason to use T-E: gzip is that the
origin server is either too stupid or too old to compress content
using C-E, and while a lot of origin servers fit that description,
I think we should just ignore them.

If we're talking "intelligent" lightbulbs, their bandwidth does not
matter, and if we're talking old servers where load/bandwidth matters,
you can stick a Varnish in front of them and have that do C-E: gzip
for you.
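
Something in this direction is all it takes (sketch only, Varnish
3-ish VCL; exact syntax depends on the version):

    sub vcl_fetch {
        /* Have the cache gzip compressible backend responses itself. */
        if (beresp.http.Content-Type ~ "text|json|javascript|xml") {
            set beresp.do_gzip = true;
        }
    }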

Compression is best done where the semantics and compressibility
of the content are known; attempting to fix it up later is a kludge.


-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.

Received on Wednesday, 26 March 2014 07:51:18 UTC