Re: Making Implicit C-E work.

From: Matthew Kerwin <matthew@kerwin.net.au>
Date: Wed, 30 Apr 2014 20:55:29 +1000
Message-ID: <CACweHNCF6+QbLA1cGzh0-3443Juoo4jkQUpoT=V5KdE2b8fHRA@mail.gmail.com>
To: Roberto Peon <grmocg@gmail.com>
Cc: HTTP Working Group <ietf-http-wg@w3.org>
[snipping]

On Apr 30, 2014 4:44 PM, "Roberto Peon" <grmocg@gmail.com> wrote:
>
>> But even so, why do you have to fix it in HTTP/2? And why does it hurt
>> h2 to *not* fix it?
>
> Compression is an important part of making latency decrease and
> performance increase, and, frankly, there is little practical motivation
> to deploy HTTP/2 if it doesn't succeed in reducing latency and
> increasing performance.

Don't the other improvements in HTTP/2 deliver those gains on their own?
Or are you saying they're not enough without ubiquitous payload
compression?

>> > The proxy, when forwarding the server's response to the HTTP/1
>> > client, must ensure that the data is uncompressed, since the client
>> > didn't ask for c-e gzip.
>>
>> Cache-Control: no-transform explicitly forbids the proxy from altering
>> the representation. It's not allowed to decompress it.
>
> In fact what we're doing is offering two representations simultaneously.
>

It's a very messy way of doing it, though, and it makes me nervous. Too
many edge cases, too many potential holes. That's ok for a de facto
standard or something, but not for a formal IETF spec. And it strikes me as
a big disincentive to adoption.
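
To make the hole concrete, here's a rough sketch of the choice an
intermediary faces (illustrative Python only; the header handling is
simplified and all the names are made up):

    # Sketch: a proxy relaying an h2 response that carries implicit
    # c-e gzip to an HTTP/1.1 client. Headers are plain dicts here;
    # a real proxy obviously does far more than this.
    import gzip

    def relay(client_req_headers, origin_resp_headers, body):
        gzipped = origin_resp_headers.get("content-encoding") == "gzip"
        accepts = "gzip" in client_req_headers.get("accept-encoding", "")
        no_transform = "no-transform" in origin_resp_headers.get(
            "cache-control", "")

        if gzipped and not accepts:
            if no_transform:
                # The client can't take gzip, but no-transform says the
                # proxy must not alter the representation. There is no
                # correct move here; that's the hole.
                raise RuntimeError("no-transform vs. client capability")
            # Otherwise the proxy itself has to undo the compression.
            body = gzip.decompress(body)
            del origin_resp_headers["content-encoding"]
        return origin_resp_headers, body

Whether the client ends up with the gzipped or the identity form depends
entirely on what it advertised, which is the "two representations"
framing above.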

Getting it right would hold back HTTP/2 (unnecessarily, I say), so don't
rush it in there. Hold off for the next iteration. I'd be happy if HTTP/3
focused solely on fixing compression. I suspect others would be too.