Re: current HTTP/2 spec prevents gzip of response to "Range" request

On 26 March 2014 17:50, Poul-Henning Kamp <phk@phk.freebsd.dk> wrote:

> Compression is best done where the semantics and compressibility
> of the content is known, attempting to fix it up later is a kludge.
>

This is true. An origin is free to choose, using whatever prior knowledge
and magic it has at its disposal, whether it's appropriate to represent a
resource with various content-encoded entities, or to serve up the same
entity using transfer encodings.
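To make the distinction concrete (header values invented for illustration): the same gzipped bytes can be presented either as a distinct content-coded entity, or as a hop-by-hop transfer-coded rendering of the canonical entity:

```
# Content coding: the gzipped bytes ARE the entity the origin vouches for
HTTP/1.1 200 OK
Content-Type: text/html
Content-Encoding: gzip

# Transfer coding: gzip is a hop-by-hop wrapper; the entity is the plain HTML
HTTP/1.1 200 OK
Content-Type: text/html
Transfer-Encoding: gzip
```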

Similarly, an intermediate device can choose, perhaps based on the received
Content-Type and Content-Encoding, on the nature of the data itself, or even
on which transfer encoding the upstream peer chose, whether it's appropriate
to apply transfer encoding downstream. Its knowledge of the semantics and
compressibility may be poorer than the origin's, but that's what you've got.
You don't *have* to do anything, after all.
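The sorts of signals listed above could be combined however the intermediary likes. As a hypothetical sketch only (the function and constants are invented, not from any real proxy), the decision might look like:

```python
# Hypothetical sketch: how an intermediary might decide whether to apply
# a gzip transfer coding to a response it forwards downstream.
# All names here are illustrative.

ALREADY_COMPRESSED = {"gzip", "br", "compress", "deflate"}
COMPRESSIBLE_TYPES = ("text/", "application/json", "application/xml")

def should_apply_gzip_te(headers: dict) -> bool:
    """Decide, from response headers alone, whether a gzip transfer
    coding is likely to help. Defaults to doing nothing when in doubt."""
    # If the entity is already content-encoded, recompressing rarely helps.
    if headers.get("Content-Encoding", "identity") in ALREADY_COMPRESSED:
        return False
    # If the upstream peer already chose a gzip transfer coding, the
    # payload is evidently compressible, so keep doing it downstream.
    if "gzip" in headers.get("Transfer-Encoding", ""):
        return True
    # Otherwise fall back on the media type as a rough guide.
    return headers.get("Content-Type", "").startswith(COMPRESSIBLE_TYPES)

print(should_apply_gzip_te({"Content-Type": "text/html"}))   # True
print(should_apply_gzip_te({"Content-Encoding": "gzip"}))    # False
```

A real proxy would weigh more than headers (payload size, CPU budget, downstream TE negotiation), but the point stands: the decision can be conservative, and "do nothing" is always a valid answer.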

It might be hard for a proxy to decide whether or not to apply compression,
but that's not really relevant to the decision on whether or not to allow
transfer encoding in the protocol.

Incidentally, the moment an intermediate device modifies the content-encoding
in any way, that device becomes the origin for the resulting new entity,
without becoming the authority for the resource. That is a bad thing, and
potentially breaks the protocol. Thus if a proxy wants any say at all in the
compression of responses, that compression has to happen through transfer
encoding.
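One way to see the breakage (a toy illustration, not from the thread; the hash stands in for a strong validator such as an ETag): a strong validator is bound to the exact entity bytes, so a proxy that rewrites the content coding serves bytes the origin's validator no longer describes, while a transfer coding is undone hop-by-hop and leaves the entity intact:

```python
# Toy illustration: re-encoding the content creates a new entity that the
# origin's strong validator (here, a truncated SHA-256 standing in for an
# ETag) no longer identifies. A transfer coding is removed before the
# entity is interpreted, so the validator still matches.
import gzip
import hashlib

body = b"<html>Hello, world!</html>" * 100
etag = hashlib.sha256(body).hexdigest()[:16]  # origin's validator for this entity

# Proxy rewrites the content coding: different bytes, different entity.
new_body = gzip.compress(body)
new_etag = hashlib.sha256(new_body).hexdigest()[:16]
print(etag == new_etag)  # False: the proxy now serves an entity the
                         # origin never vouched for

# A transfer coding is stripped hop-by-hop, restoring the original bytes,
# so the origin's validator still holds after decoding.
decoded = gzip.decompress(new_body)
print(hashlib.sha256(decoded).hexdigest()[:16] == etag)  # True
```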

-- 
  Matthew Kerwin
  http://matthew.kerwin.net.au/

Received on Thursday, 27 March 2014 06:01:55 UTC