Re: current HTTP/2 spec prevents gzip of response to "Range" request

On 26.03.2014 08:50, Poul-Henning Kamp wrote:
> In message <CAOdDvNrOJXu8r86A1wyShaj94ccTcEZByLnEcE_=SWGj-E66aw@mail.gmail.com>
> , Patrick McManus writes:
>
>> I'm unlikely to implement a gzip transfer encoding decoder any time soon.
> And I don't think Varnish will either.
>
> Having gone through all that stuff recently, I am well aware of all the
> trouble with C-E, but at least it is known trouble at this point in time.
>
> I'd like to add one observation which I have not seen mentioned yet:
>
> A big issue with T-E: gzip is that you need logic to recognize already
> compressed content.
>
> You can either do this by attempting compression and finding out that
> it doesn't help, which requires access to about 1k+ of data before you
> can trust the result, or by keeping a list of magic format markers
> (see file(1)) to tell you, and being caught off-guard when somebody
> does something new on the web.
>
> Both are unattractive complications for very little gain.
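For what it's worth, a rough sketch of what that detection logic boils
down to, in Python (purely illustrative; the marker list and the ~1k
trial size are assumptions, not taken from any real implementation):

import zlib

# A few well-known markers; a real list would be much longer (see file(1)).
COMPRESSED_MAGIC = (
    b"\x1f\x8b",      # gzip
    b"PK\x03\x04",    # zip
    b"\xff\xd8\xff",  # JPEG
    b"\x89PNG",       # PNG
)

def looks_compressed(first_chunk: bytes) -> bool:
    """Guess whether a body is already compressed.

    first_chunk should be roughly the first 1k of the body; with much
    less data the trial-compression result cannot really be trusted.
    """
    # Option 1: magic format markers. Cheap, but caught off-guard by
    # any format that is not on the list.
    if first_chunk.startswith(COMPRESSED_MAGIC):
        return True
    # Option 2: trial compression. Compress the chunk and see whether
    # it still shrinks by a meaningful amount.
    trial = zlib.compress(first_chunk, 1)
    return len(trial) > 0.9 * len(first_chunk)

Either way it is an extra heuristic that has to run on every response body.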
>
> Analytically, about the only reason to use T-E: gzip is that the
> origin server was either too stupid or too old to compress content
> using C-E, and while a lot of origin servers fit that description,
> I think we should just ignore them.
C-E: gzip does not allow seeks, e.g. you need to read 4G of an 8G file
just to get at the 100 bytes you are interested in.
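To make that concrete, a rough sketch (plain zlib, illustrative only,
not Varnish code): to serve a byte range out of a C-E: gzip body you
have to decompress and discard everything before the requested offset.

import zlib

def read_range_from_gzip(stream, offset, length, chunk=64 * 1024):
    """Return `length` decompressed bytes starting at `offset`.

    A gzip stream does not support seeking, so everything before
    `offset` has to be decompressed and thrown away first.
    """
    d = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)  # expect gzip framing
    skipped = 0
    out = b""
    while len(out) < length:
        raw = stream.read(chunk)
        if not raw:
            break
        data = d.decompress(raw)
        if skipped < offset:
            drop = min(offset - skipped, len(data))
            skipped += drop
            data = data[drop:]
        out += data
    return out[:length]

With uncompressed content the server can simply seek; with C-E: gzip the
cost of serving a small range grows with the offset.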
> If we're talking "intelligent" lightbulbs, their bandwidth does not
> matter, and if we're talking old servers where load/bandwidth matters,
> you can stick a Varnish in front of them and have that do C-E: gzip
> for you.
>
> Compression is best done where the semantics and compressibility
> of the content are known; attempting to fix it up later is a kludge.
>
>

Received on Wednesday, 26 March 2014 09:31:24 UTC