
Re: Moving forward on improving HTTP's security

From: Willy Tarreau <w@1wt.eu>
Date: Thu, 14 Nov 2013 08:15:10 +0100
To: Roberto Peon <grmocg@gmail.com>
Cc: Frédéric Kayser <f.kayser@free.fr>, HTTP Working Group <ietf-http-wg@w3.org>
Message-ID: <20131114071510.GH10912@1wt.eu>
Hi Roberto!

On Wed, Nov 13, 2013 at 07:24:32PM -0800, Roberto Peon wrote:
> As far as I've seen, most small businesses get little enough traffic that
> they wouldn't notice any difference w.r.t CPU usage.

I agree on this point, but it splits the world into two groups, each of
which sees an increased cost in a different area:
  - those who won't notice the difference in CPU usage but will clearly
    see a difference in complexity due to the new requirement of managing
    their certs;

  - those who are large enough to correctly manage certs and who will
    definitely see a difference in CPU usage. Just one example: haproxy
    happily runs at 40 Gbps when transferring video streams on commodity
    hardware, and I've seen people use it in production with sustained
    20 Gbps of traffic. The same hardware cannot do more than 5 Gbps with
    AES-GCM, which is the fastest cipher we found. They would then have
    to multiply the number of machines several times over for the same
    task!
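To put a concrete number on that second point, a back-of-the-envelope calculation using the figures quoted above (which are tied to that particular hardware and workload, so treat them as assumptions) gives the machine multiplier:

```python
import math

def machines_needed(target_gbps: float, per_machine_gbps: float) -> int:
    """How many machines are needed to sustain target_gbps when each
    machine tops out at per_machine_gbps."""
    return math.ceil(target_gbps / per_machine_gbps)

# Figures from the discussion: ~40 Gbps of plaintext proxying per box,
# but only ~5 Gbps once everything runs over AES-GCM.
in_clear = machines_needed(40, 40)   # 1 machine in the clear
with_tls = machines_needed(40, 5)    # 8 machines under TLS

print(in_clear, with_tls)  # 1 8
```

So at these rates, mandatory encryption means roughly an 8x increase in hardware for the same sustained traffic.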

> .. and if it bothers them, they'd use HTTP/1.1 for web stuff, or are
> already doing so.

I'm not much worried about that, in fact. "Legacy" web servers will continue
to exist for years and decades. I'm more concerned about the requirement,
which some tend to promote, of TLS for HTTP/2 as part of the protocol rather
than as an implementation choice. HTTP/1 works on any stream, including unix
sockets between processes. If we limit HTTP/2 to just the outside, we lose
the benefit of its improvements end-to-end (e.g. server push). Also, HTTP is
not used solely by browsers but by many other products, and we want to
ensure that they will still be able to benefit from this. One last point
concerning browsers: having worked with application development teams, I
know that the most painful thing these people face is testing their code.
They'll definitely need a way to unlock their browser to use HTTP/2 with a
local server, otherwise none of the cool new features will really be
adopted, because of the complexity of testing.

> In any case, it is extremely likely that HTTP/2.0 on port 80 is nearly
> undeployable for the web today. There are too many MITM proxies out there
> that expect port 80 to carry only a subset of HTTP/1.1, and make a mess of
> anything else.

Which is one reason for playing with the "default" setting: defaulting to
1.1 for plain HTTP and to 2.0 for TLS makes sense to me. Acting like this
also saves us from trying to optimise the Upgrade handshake. So we keep it
for legacy usages (development, testing, non-browser uses, etc.) and we know
that, from the protocol point of view, it's sane even if not optimal. And we
can use ALPN for TLS.
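A minimal sketch of that defaulting rule, together with how a client would
advertise both protocols via ALPN using Python's ssl module. The "h2" and
"http/1.1" token strings and the `default_protocol` helper are illustrative
assumptions on my part, not anything the working group has settled:

```python
import ssl

# Client side: offer both protocols over TLS; the server picks via ALPN.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])  # preference order

def default_protocol(over_tls, alpn_choice=None):
    """Sketch of the defaulting rule: plain connections stay on HTTP/1.1
    (no Upgrade dance needed); TLS connections use whatever ALPN
    negotiated, falling back to 1.1 if nothing was agreed."""
    if not over_tls:
        return "http/1.1"
    return alpn_choice or "http/1.1"

print(default_protocol(False))       # port 80 stays on http/1.1
print(default_protocol(True, "h2"))  # TLS + ALPN agreement gives h2
```

After the handshake, the client would read the server's choice from
`SSLSocket.selected_alpn_protocol()` and speak that version.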

However, we absolutely must improve proxy support so that there is an
incentive for using explicit proxies with HTTP/2. This is critically
important in mobile environments, where most of the time is spent between
the browser and the ISP. Having a TLS-only proxy definitely is a good thing
and makes a lot of things easier and safer.

> So, any web deployment of HTTP/2 that is going to be reliable WILL use
> encryption, and WILL incur the cost of encryption.

And pink pixel providers will remain on 1.1 for a very long time, even
though they're very sensitive to site response time. Maybe the attempts to
provide 2.0 in the clear will come from them.

> .. as such, the only real question here is simply about authentication.
> 
> I do expect that we'll see HTTP/2.0 in the clear, but that would be inside
> of a VPN or other private network, and Mark's original email was talking
> about the web usecase.

Absolutely.

Willy
Received on Thursday, 14 November 2013 07:15:35 UTC
