
Re: 2 questions

From: Cory Benfield <cory@lukasa.co.uk>
Date: Mon, 30 Mar 2015 12:19:55 +0100
Message-ID: <CAH_hAJF3VZ20-mV+nBKuUOeHByOnD3q-z0+QKx3Vkb7qPgXzxw@mail.gmail.com>
To: Amos Jeffries <squid3@treenet.co.nz>
Cc: "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
On 30 March 2015 at 10:27, Amos Jeffries <squid3@treenet.co.nz> wrote:
> So your answer is "Just use HTTP/1.1" ?
>
> Regardless of how long the transition would take, one of the goals of
> HTTP/2 is to replace it. *Any* network which is forced to stay with
> HTTP/1 simply because of a missing protocol capability is a failure of
> HTTP/2.

What protocol capability would that be? As I said far earlier in this
thread, HTTP/2 supports plaintext: Chrome and Firefox don't support
it. The protocol is capable: the implementations are not.
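
As a concrete illustration of what "the protocol is capable" means, here is a sketch (not taken from any particular implementation; the host and the empty settings payload are illustrative) of how a client starts plaintext HTTP/2 on port 80: it sends an ordinary HTTP/1.1 request offering an upgrade to "h2c", with a base64url-encoded SETTINGS payload in the HTTP2-Settings header.

```python
import base64

# Sketch: plaintext HTTP/2 negotiation per the HTTP/2 spec.  An empty
# SETTINGS payload means "all defaults"; "example.com" is a placeholder.
settings_payload = b""
http2_settings = base64.urlsafe_b64encode(settings_payload).decode("ascii")

upgrade_request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: Upgrade, HTTP2-Settings\r\n"
    "Upgrade: h2c\r\n"
    "HTTP2-Settings: " + http2_settings + "\r\n"
    "\r\n"
)
```

A server that understands this replies 101 Switching Protocols and then speaks HTTP/2; one that doesn't simply answers the request over HTTP/1.1, which is the graceful fallback path.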

If the protocol suffers problems from intermediaries that understand
only HTTP/1.1, then yes, there was a failure in the protocol when we
chose to use TCP port 80 for plaintext. We can deal with that problem
if and when it arises.

>> In this case I think Google and Firefox are probably right: HTTP/2 in
>> plaintext is likely to break frequently and mysteriously.
>
> Guesses and supposition. Look at who you are throwing those arguments at
> ... the very authors of the major middleware implementations.

I apologise for tarring everyone with the same brush; that was never
my intention. However, I'm talking to the authors of *two* of the
major middleware implementations. There are many others, some of which
do not support HTTP/2 and may never do so (Varnish leaps to mind).
Many intermediaries will support plaintext HTTP/2 well and cleanly:
those are not the ones I believe will cause problems. My worry is
*bad* middleware implementations that assume all port 80 traffic is
HTTP/1.1, and therefore make unexpected modifications to it. It is not
unreasonable to want to avoid that problem by preventing
intermediaries from seeing HTTP/2 at all.

> The Chrome choice was based on SPDY metrics IIRC. Which measured how
> many connections over TLS were forced to "just use HTTP/1.1" versus
> allowed to use SPDY. That was done under conditions where *none* of the
> middleware supported SPDY and TLS was able to supply a bypass.
>
> Neither of those measurement conditions is true for HTTP/2. We, the
> major middleware implementation authors, participate in the WG and are
> actively implementing HTTP/2 already. The growth of TLS interception
> will undoubtedly have reduced TLS's ability to bypass middleware.
>
>
> The middleware argument for TLS is a red herring.

That is as may be, but you're arguing with the wrong person. I've
already said I plan to support HTTP/2 in plaintext in my
implementation. I'm simply repeating what my concerns are with how
successful it will be, certainly in the short term. My response was to
a question asking why HTTP/2 requires TLS, and I was saying that the
protocol does not, but some implementations do.
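
For contrast with the plaintext upgrade, here is a minimal sketch (using Python's standard ssl module; the hostname in the comment is a placeholder) of how those TLS-only implementations negotiate HTTP/2: the protocol choice happens inside the TLS handshake via ALPN, so no Upgrade header is needed and no intermediary sees the negotiation.

```python
import ssl

# Sketch: over TLS, HTTP/2 is selected with ALPN during the handshake.
# The client offers "h2" and can fall back to HTTP/1.1 if the server
# (or an intercepting middlebox) does not select it.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])

# After wrapping a socket, e.g.
#   tls = ctx.wrap_socket(raw_sock, server_hostname="example.com")
# the negotiated protocol is available as:
#   tls.selected_alpn_protocol()   # "h2", "http/1.1", or None
```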

> Injection of headers is compliant with HTTP (both versions).

Sure is, but I was talking about doing it *badly*, which is not the
same thing. For every good, up-to-date HTTP intermediary there are two
bad ones (usually written by cowboys like me). The same is true of
servers and clients, of course, but the difference is that bad servers
are under the control of site administrators (incentivised to improve
user experience) and bad clients are under the control of users
(incentivised to change to a working client). Bad intermediaries are
often transparent and under the control of an unrelated third party.

Obviously, this is a generalisation, but it certainly applies quite widely.

> One can as easily point at the many millions of users forced to endure
> horrible network lag issues and sometimes outright DoS when Chrome
> implemented SDCH encoding.
>
>
> Don't kid yourself about browsers protecting either users or websites -
> at least no more than they need to make gains in the browser wars. We
> have a loooong laundry list of things they refuse to do that would
> vastly improve end users privacy, security, and website efficiency.
> Their focus IME is towards their own corporate goals (as one should expect).

Yes, we can all accept blame here. The difference, as I mention above,
is in what those involved in an HTTP transaction can do. They have
more power over servers and clients than they do over intermediaries.

>> At this point in time, my HTTP/2 implementation does not support
>> plaintext HTTP/2. I will add support for it in the next few weeks, but
>> I do not expect it to work in the vast majority of cases, and will be
>> emitting warning logs to that effect.
>>
>
> Are you emitting similar warnings for all HTTP/2-over-TLS failures?
>  You will find a lot of middleware out there these days decrypting the
> TLS and demanding HTTP/1 inside. The magic PRI prefix "request" works
> the same regardless of TLS usage - as it was designed to.

Actually, I treat HTTP/2-over-TLS failures more aggressively: I throw
exceptions. This is primarily a security-conscious move, attempting to
maintain the semantics of HTTPS. At some stage I'll likely get a
feature request to relax this behaviour, but until then I'm holding
HTTP/2-over-TLS to a higher standard than plaintext HTTP/2.
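
For readers unfamiliar with the "magic PRI prefix" mentioned above: it is the fixed 24-byte connection preface every HTTP/2 client sends first, with or without TLS. The sketch below (the sniffing function is hypothetical, not part of any implementation discussed here) shows why it trips up HTTP/1.1-only parsers: it looks like a request with the bogus method "PRI" and version "HTTP/2.0", deliberately chosen so such a parser fails fast rather than silently misreading the stream.

```python
# The HTTP/2 connection preface, sent before any frames.
CONNECTION_PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

def looks_like_h2(data: bytes) -> bool:
    """Hypothetical server-side sniff: has this connection opened with
    the HTTP/2 preface rather than an HTTP/1.x request line?"""
    return data[:24] == CONNECTION_PREFACE
```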
Received on Monday, 30 March 2015 11:20:27 UTC
