Re: Making Implicit C-E work.

On Tue, Apr 29, 2014 at 9:50 PM, Matthew Kerwin <matthew@kerwin.net.au> wrote:

> On Apr 30, 2014 10:07 AM, "Roberto Peon" <grmocg@gmail.com> wrote:
> >
> > On Tue, Apr 29, 2014 at 3:46 PM, Matthew Kerwin <matthew@kerwin.net.au>
> wrote:
> >>
> >> On 30 April 2014 07:54, Roberto Peon <grmocg@gmail.com> wrote:
> >>>
> <snip>
> >>>
> >>> The proposal doesn't modify HTTP semantics, but preserves them while
> offering a needed base capability assurance.
> >>>
> >>
> >> Well, it changes a client's ability to request an uncompressed entity.
> >
> > An HTTP/1 client still receives an uncompressed entity if it didn't
> request it compressed.
> >
>
> It gives the gateway a bunch of extra hoops to jump through, and
> obligations to take on.
>

True, it does add some obligations in the HTTP/2<->HTTP/1 case, though
characterizing them as "a bunch" is an exaggeration.


> >>
> >>>>
> >>>>>
> >>>>> I don't dispute that one could use T-E over HTTP/2 for this,
> assuming that it was end-to-end (which, unfortunately, it will not be for
> quite some time).
> >>>>>
> >>>>
> >>>> Beside the point. I'm happy for people to use (and abuse) C-E. I'm
> just not happy for us to promote or mandate it in the spec.
> >>>>
> >>>
> >>> As I described it, its use by the originator of an entity is not
> mandated; instead, behaviors are mandated of recipients when it IS used.
> >>>
> >>
> >> Yeah, mandating it. Which I'm not happy about.
> >
> > Mandates support, not use.
> >
>
> Kind of the same thing, from the client's POV. Server's choice.
>

And today it is often neither the server's nor the client's choice, which
is what is causing the pain. The client expresses that it wants gzip. The
intermediary doesn't do it, because skipping compression makes its numbers
look better and increases its throughput, or because implementing it is
too much work, all at the cost of a degraded user experience.
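
To make that concrete, here is the kind of exchange I mean (hosts and
paths invented purely for illustration):

    Client -> intermediary:

        GET /page HTTP/1.1
        Host: example.com
        Accept-Encoding: gzip

    Intermediary -> origin, with the Accept-Encoding stripped:

        GET /page HTTP/1.1
        Host: example.com

The client asked for gzip, but the origin never sees that, and so a
spec-compliant origin must send the larger identity representation.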


> <snip>
> >
> > The combination of intermediaries stripping a-e plus the competitive
> driver to deliver good experience/latency is causing interop failures today
> where servers will send gzip'd data whether or not the client declares
> support in a-e.
> >
>
> Wait, you're saying the whole motivator here is that servers don't comply
> with the protocol? So you're changing the protocol to accommodate them?
> That does not feel right to me, at all; it's not just blessing a potential
> misuse of C-E, it's wallpapering over a flat out abuse.
>
Partially.
I'm saying that intermediaries are doing things that incentivize
implementors to break compatibility with the spec, and that implementors
are doing so because it makes the users happy.
In the end, making the users happy is what matters, both commercially and
privately. Users really don't care about purity, and will migrate to
implementations that give them a better user experience.

> But even so, why do you have to fix it in HTTP/2? And why does it hurt h2
> to *not* fix it?
>

Compression is an important part of decreasing latency and increasing
performance, and, frankly, there is little practical motivation to deploy
HTTP/2 if it doesn't succeed at doing both. Success isn't (or shouldn't
be) defined as completing a protocol spec, but rather as getting an
interoperable protocol deployed. If it doesn't get deployed, the effort is
wasted. If it doesn't solve real problems, the effort is wasted.

In any case, I cannot reliably deploy a T-E based compression solution.
T-E based compression costs too much CPU, especially as compared with C-E,
where one simply compresses any static entity once and decompresses (which
is cheap) as necessary at the gateway.
T-E based compression isn't as good in terms of compression ratios.
Many deployed clients/servers wouldn't correctly support it.
T-E would require that any gateway acting as a loadbalancer/reverse proxy
either know which resources it could compress, or it would force us to not
use compression at all. Knowing which resources to compress requires either
an oracle, or content authors changing how they author content (*really*
not likely to happen).
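
For reference, the wire-level difference (header lines are illustrative
only): with C-E the compression is part of the representation and
survives end to end, so a static entity can be compressed exactly once:

    HTTP/1.1 200 OK
    Content-Encoding: gzip
    Content-Length: 1234

With T-E the compression is hop-by-hop, negotiated via the TE header,
and every hop that applies it pays the compression cost again on every
response:

    HTTP/1.1 200 OK
    Transfer-Encoding: gzip, chunked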

> <snip>
>
> > The proxy, when forwarding the server's response to the HTTP/1 client,
> must ensure that the data is uncompressed when forwarding to the HTTP/1
> client since the client didn't ask for c-e gzip.
> >
>
> Cache-Control:no-transform explicitly forbids the proxy from altering the
> representation. It's not allowed to decompress it.
>
In fact, what we're doing is offering two representations simultaneously.
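
Concretely, the gateway picks whichever representation the next hop can
accept (illustrative header lines, not normative text):

    To an HTTP/1 client that sent Accept-Encoding: gzip, forward the
    stored bytes as-is:

        HTTP/1.1 200 OK
        Content-Encoding: gzip

    To an HTTP/1 client that sent no Accept-Encoding, serve the identity
    representation:

        HTTP/1.1 200 OK

The gateway is selecting between two representations that are both on
offer, not transforming one into the other.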


> <snip>
> > I'll point out that if that was applied recursively, there'd be no need
> to do anything HTTP/2 because we could always let them keep doing what
> they've been doing.
> > One of the drivers of HTTP/2 is to seek out performance. If it doesn't
> deliver on that, it is not a useful effort. Ensuring that entity-bodies are
> compressed when it is safe to do so is an important driver of performance.
> >
>
> The line in the sand was drawn at transport. You're trying to optimise
> entity representation. You can't fix all the web's woes with HTTP/2. Not
> even all the ones you care about.
>
Lol! This is one that we've already 'fixed' (and the fix has been deployed
for years), and a bug was discovered. That is good: we can fix the bug.
You're advocating removing it in the pursuit of purity. I'm advocating
patching it.

<snip>
-=R
