Re: Making Implicit C-E work.

On Apr 30, 2014 10:07 AM, "Roberto Peon" <grmocg@gmail.com> wrote:
>
> On Tue, Apr 29, 2014 at 3:46 PM, Matthew Kerwin <matthew@kerwin.net.au>
wrote:
>>
>> On 30 April 2014 07:54, Roberto Peon <grmocg@gmail.com> wrote:
>>>
>>> On Tue, Apr 29, 2014 at 2:45 PM, Matthew Kerwin <matthew@kerwin.net.au>
wrote:
>>>>
>>>> On 30 April 2014 07:33, Roberto Peon <grmocg@gmail.com> wrote:
>>>>>
>>>>> For better or worse, C-E is what is deployed today. Many of my
customers will not be writing custom servers, and as such to be deployable,
we need solutions that will work with what is out there.
>>>>> Otherwise, the feature is effectively only of theoretical use for the
majority of customers.
>>>>
>>>> Yes, C-E is what we have, but that's no reason to promote it to a MUST
support, and even less of a reason to promote it to a MUST support with
no way to opt out. Some people use C-E as a hack, some other people
disable C-E as a hack; whose hack is the worse? You've made your call
there, but we don't all have to agree.
>>>>
>>>> There is no valid justification for modifying HTTP semantics, and
dictating how people use entities, in HTTP/2.
>>>
>>> The proposal doesn't modify HTTP semantics, but preserves them while
offering a needed base capability assurance.
>>>
>>
>> Well, it changes a client's ability to request an uncompressed entity.
>
> An HTTP/1 client still receives an uncompressed entity if it didn't
request it compressed.
>

It gives the gateway a bunch of extra hoops to jump through, and
obligations to take on.
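
Concretely: today a 1.1->2 gateway can pass the response body through
byte for byte; under your proposal it has to do something like the
following on every response -- a rough Python sketch of the obligation
as I understand it, not code from any real gateway, and with header
handling simplified to lower-cased, single-valued dict keys:

import gzip

def forward_response_downstream(client_req_headers, resp_headers, body):
    # Rough sketch of the 1.1->2 gateway's new per-response obligation.
    implicitly_gzipped = resp_headers.get("content-encoding") == "gzip"
    client_negotiated_gzip = "gzip" in client_req_headers.get(
        "accept-encoding", "")

    if implicitly_gzipped and not client_negotiated_gzip:
        # The HTTP/1.1 client never negotiated gzip, so the gateway has
        # to undo the compression itself before forwarding...
        body = gzip.decompress(body)
        del resp_headers["content-encoding"]
        # ...and patch up the metadata that described the compressed
        # bytes.
        resp_headers["content-length"] = str(len(body))
        # The ETag still names the gzipped representation: the "etag
        # wart".
    return resp_headers, body

And that's the easy case; the no-transform scenario below is where it
falls apart.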

>>
>>>>
>>>>>
>>>>> I don't dispute that one could use T-E over HTTP/2 for this, assuming
that it was end-to-end (which, unfortunately, it will not be for quite some
time).
>>>>>
>>>>
>>>> Beside the point. I'm happy for people to use (and abuse) C-E. I'm
just not happy for us to promote or mandate it in the spec.
>>>>
>>>
>>> As I described it, its use by the originator of an entity is not
mandated; instead, behaviors are mandated of recipients when it IS used.
>>>
>>
>> Yeah, mandating it. Which I'm not happy about.
>
> Mandates support, not use.
>

Kind of the same thing, from the client's POV. Server's choice.

>
>>
>>>
>>> 2) We know that software and vendors out there take shortcuts which
harm performance and have been harming interop.
>>
>> I'm just saying this isn't the time or the way to fix it. One battle at
a time. Use TCP better and make headers less bloaty now -- issues that
affect everyone on the web; worry about edge cases like stupid or malicious
proxies later.
>
> The combination of intermediaries stripping a-e plus the competitive
driver to deliver good experience/latency is causing interop failures today
where servers will send gzip'd data whether or not the client declares
support in a-e.
>

Wait, you're saying the whole motivator here is that servers don't comply
with the protocol? So you're changing the protocol to accommodate them?
That does not feel right to me, at all; it's not just blessing a potential
misuse of C-E, it's wallpapering over a flat-out abuse.

But even so, why do you have to fix it in HTTP/2? And why does it hurt h2
to *not* fix it?

>
> I know of no case where the current (implicit gzip) behavior caused any
problems large enough to rise to the level of visibility. I *have* had to
deal with interop and performance issues because of intermediaries
stripping a-e gzip.
> Thus far, (and I guess surprisingly) the *interop* problems of not having
gzip are larger than the interop problems of implicitly assuming it between
HTTP2 actors, etag wart and all.
>

Not that surprising, really. If the current behaviour (unblessed
assumption) caused more trouble than it's worth, it wouldn't have stuck
around. Blessing the assumption doesn't fix it, though, not without
extending your proposal to include before-compression metadata (you
missed last-modified, but there could be other headers too, especially
proprietary extension headers)... and if you do that you end up with my
earlier idea of a level of compression somewhere between
entity-representation and transport. But because that's just too much work
to get right, and even harder to make interoperate with older HTTP, I
deferred it until HTTP 3+.
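
For concreteness, here's the kind of thing I mean. The header names
below (pre-encoding-*) are invented purely for illustration -- they're
not from your proposal or any draft -- but something like them is what
a gateway would need in order to hand an HTTP/1 client a faithful
original representation:

import gzip

# Hypothetical "before-compression" metadata; these header names are
# invented for illustration and appear in no proposal or draft.
PRE_ENCODING_HEADERS = {
    "pre-encoding-etag": "etag",
    "pre-encoding-content-length": "content-length",
    "pre-encoding-last-modified": "last-modified",
    # ...plus whatever proprietary extension headers also describe the
    # entity, which is the open-ended part that's hard to get right.
}

def restore_original_representation(headers, body):
    # Undo the implicit gzip and restore the original entity's
    # metadata. Assumes lower-cased, single-valued header dicts.
    if headers.get("content-encoding") != "gzip":
        return headers, body
    body = gzip.decompress(body)
    del headers["content-encoding"]
    for carried, original in PRE_ENCODING_HEADERS.items():
        if carried in headers:
            headers[original] = headers.pop(carried)
    return headers, body

Once you're carrying a parallel set of headers describing the
uncompressed form, you've effectively invented a compression layer
below the representation -- which is the HTTP 3+ idea I keep deferring.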

In fact, only yesterday I set up a rough-and-ready wiki on GitHub to
track my thoughts on the matter.

It's at <https://github.com/phluid61/http3> IIRC.

>>
>>>
>>> 3) We must continue to consider deployability and not just theoretical
usefulness -- the deployability of the protocol is the reason we've gotten
to where we are, and it doesn't make sense to stop thinking about it.
>>
>> Sure, so what does a 1.1->2 proxy do when a 1.1 client requests
Accept-Encoding:identity + Cache-Control:no-transform and the 2 server
responds with Content-Encoding:gzip? That's currently possible, even if
unlikely, and we still have no suggestion at all for what the proxy is
supposed to do about it. Undocumented unlikely edge cases are the worst
kind, no? I think that makes the current draft undeployable for those
gateways, and they're the ones who are going to be doing a lot of the heavy
lifting of getting HTTP/2 out there to the masses.
>
> The proxy, when forwarding the server's response to the HTTP/1 client,
must ensure that the data is uncompressed, since the client didn't ask
for c-e gzip.
>

Cache-Control:no-transform explicitly forbids the proxy from altering the
representation. It's not allowed to decompress it.
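
Spelled out as a check, the three headers in that exchange pull in
incompatible directions. A minimal sketch (mine alone, with real
Accept-Encoding parsing -- qvalues and so on -- elided):

def gateway_has_a_conforming_move(req_headers, resp_headers):
    # True only if the gateway can forward the response without either
    # sending gzip the client refused or transforming a no-transform
    # response. Assumes lower-cased, single-valued header dicts.
    identity_only = req_headers.get("accept-encoding") == "identity"
    no_transform = "no-transform" in req_headers.get("cache-control", "")
    gzipped = resp_headers.get("content-encoding") == "gzip"

    can_forward_as_is = not (gzipped and identity_only)
    can_decompress = not no_transform
    return can_forward_as_is or can_decompress

# The case above: identity + no-transform request, gzip response.
assert not gateway_has_a_conforming_move(
    {"accept-encoding": "identity", "cache-control": "no-transform"},
    {"content-encoding": "gzip"},
)

Whichever branch the gateway picks, it violates something the HTTP/1.1
client explicitly asked for.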

> This seems easy enough to document in the HTTP/2 spec if it isn't already
clear from the 1.1 spec, and the mechanism for doing so was specified in my
first email...
>
>>
>>>
>>> I have a real honest-to-goodness pragmatic deployment problem (myriad
pre-existing servers/clients whose deployment I do not and cannot control)
here that I cannot wish away, and this is a solution.
>>> Can you propose another solution to this problem?
>>>
>>
>> Which problem, exactly? If it's that people are doing a certain thing in
HTTP/1.1, then the simplest solution is: let them keep doing it. No need to
add any words to the spec. If some proxy is stripping Accept-Encoding
headers, that's a raw deal for the users behind it, but it doesn't break
the internet. Those users would still see an improvement after switching to
HTTP/2.
>
> I'll point out that if that were applied recursively, there'd be no need
to do anything in HTTP/2, because we could always let them keep doing what
they've been doing.
> One of the drivers of HTTP/2 is to seek out performance. If it doesn't
deliver on that, it is not a useful effort. Ensuring that entity-bodies are
compressed when it is safe to do so is an important driver of performance.
>

The line in the sand was drawn at transport. You're trying to optimise
entity representation. You can't fix all the web's woes with HTTP/2. Not
even all the ones you care about.

>
>>
>>>
>>> An HTTP2 without C-E based compression will not achieve some of the
objectives that it would with it.
>>>
>>
>> It's not without compression; it just doesn't mandate it. What we gain
in return is: not creating paradoxes in the protocol. Arguing pragmatism is
all well and good, but you still haven't fully addressed the gateway issue.
Theoretical holes have ways of becoming real, practical holes, and often
reveal massive security issues when they do so.
>>
>
> If one ensures that etag and content-length are sent as specified in my
proposal, then gateways will always produce the representation that the
HTTP/1 client requested. There is no paradox there.
>

You'd probably want to explicitly tell the proxies that it's ok to
partially ignore Cache-Control:no-transform, in that case. Even though that
*is* changing semantics.
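
In terms of the earlier sketch, the carve-out you'd need amounts to
something like this (hypothetical, obviously -- a sketch of what
"partially ignore" would mean, not proposed spec text):

def may_decompress(req_headers, implicit_gzip_exception=True):
    # Hypothetical carve-out: undoing *implicit* gzip is permitted even
    # when the request carries Cache-Control: no-transform.
    no_transform = "no-transform" in req_headers.get("cache-control", "")
    return (not no_transform) or implicit_gzip_exception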

Received on Wednesday, 30 April 2014 04:50:33 UTC