
Re: TLS over http2 frames

From: Mark Nottingham <mnot@mnot.net>
Date: Fri, 15 Aug 2014 13:36:24 +1000
To: Ben Burkert <ben@benburkert.com>
Cc: "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Message-Id: <C422E65E-5A52-45BD-8066-88AA91AC5342@mnot.net>

Hi Ben,

On 15 Aug 2014, at 12:28 pm, Ben Burkert <ben@benburkert.com> wrote:

> Hello,
> I believe that there is a need for an http2 extension that allows for TLS over http2 frames. The extension would support simultaneous encryption between the client & intermediary and end-to-end encryption between the client & server on the same connection. The use case is analogous to browsers that send HTTPS requests over an encrypted VPN connection.

Note that we already specify how to use CONNECT over HTTP/2:

... which does allow tunnelling TLS over an HTTP/2 stream. That doesn't directly address your needs, however.
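For concreteness, here's a minimal sketch of what CONNECT establishes (the hostname is made up; the HTTP/1.1 wire form is shown for readability, though in HTTP/2 the same request travels as a HEADERS frame):

```python
# Sketch of the tunnel request CONNECT makes. In HTTP/2 the equivalent is a
# HEADERS frame with :method = CONNECT and :authority = origin.example:443,
# after which the TLS bytes flow as DATA frames on that stream.
# "origin.example" is a placeholder hostname.

def connect_request(authority: str) -> bytes:
    # The intermediary opens a TCP connection to `authority` and from then
    # on relays bytes blindly, so the client can run TLS end to end.
    return (f"CONNECT {authority} HTTP/1.1\r\n"
            f"Host: {authority}\r\n"
            f"\r\n").encode()

req = connect_request("origin.example:443")
```

Once the intermediary answers 2xx, everything on that stream is opaque to it — which is exactly why it can't cache or act on what passes through.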

> The CDN industry has largely evolved around the constraints of the modern web protocols; especially wrt http1.1 and https. In a number of ways the proposed http2 protocol has been designed to address flaws & oversights in http1.1 that made CDNs a necessity.

I don't know of anyone saying that HTTP/2 will get rid of CDNs; they're largely complementary.

> A large amount of design effort has been focused on reusing and optimizing a single client/server connection. This is at odds with the CDN practice of serving a website's assets through an out-of-band connection to the CDN's network.

Common practice for CDNs today is to CNAME the hostname over to the CDN, so that all requests can be served through it. This isn't universal, of course, but it is very common.

> As CDNs adapt to http2, their role may evolve into that of an intermediary layer that provides its service over the same http2 connection between client and server. We are already seeing similar services offered by CDNs. For example, traffic to www.whitehouse.gov is served first through Akamai's network before requests reach the Drupal backend servers. This type of service can benefit clients by providing high availability and expanded presence, but at a cost to security: end-to-end encryption is not possible. Encrypted traffic must be terminated by the intermediary and then re-encrypted on its way to the server. This is the case for http1.1 as well as http2, which is at odds with http2's goal of improved security.

Just to be clear -- while we have done some things to improve security in HTTP/2, it's not a chartered goal.

> An extension to allow TLS over frames addresses this problem by providing a second layer of encryption. The outside (existing) layer would be used for communication between the client/server and intermediary, while the inside layer would provide end-to-end encryption between the client and server. Requests and responses would also be layered: the outside request/response would contain headers shared with (and acted upon by) the intermediary and the other two parties. The inside request/response would be passed through blindly by the intermediary. To put it a different way, the outside request/response is there to help the intermediary route/cache/serve the inside request/response, which it cannot see.
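The plumbing for "TLS over frames" already exists in some stacks: Python's ssl module, for instance, can run TLS over an arbitrary byte transport via memory BIOs, which is the shape such an inner layer would take (hostname below is made up; no network I/O happens here):

```python
import ssl

# Sketch: MemoryBIO/SSLObject decouple TLS from the socket, so the
# handshake bytes can be ferried inside HTTP/2 DATA frames instead.
ctx = ssl.create_default_context()
incoming = ssl.MemoryBIO()   # bytes received from the peer (DATA frames in)
outgoing = ssl.MemoryBIO()   # bytes to send to the peer (DATA frames out)
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="origin.example")

try:
    tls.do_handshake()       # can't complete: no server bytes arrived yet
except ssl.SSLWantReadError:
    pass

# The ClientHello now sits in `outgoing`; a client would ship it to the
# origin as DATA frames on the stream, opaque to the intermediary.
client_hello = outgoing.read()
assert client_hello[:1] == b"\x16"   # TLS record type 22 = handshake
```

So the mechanics are straightforward; the hard problems are the ones below.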

This has been discussed a fair amount (albeit, mostly casually). There are a couple of big problems to address, though:

* Encapsulating TLS (or something with similar properties) inside of HTTP doesn't imply that you can cache it (one of the biggest benefits of a CDN). TLS uses a session key that changes from client to client (and often between different connections of the same client); you can't replay a response from client A to client B, because it uses a different session key. 

* You could take alternative approaches, but any scheme that allows multiple people to read the same encrypted content potentially leaks individuals' activities to the intermediary, which has serious privacy consequences if you're aiming for end-to-end. e2e integrity, OTOH, is very possible, but it adds more key management (see below).

* Adding another layer like this adds a considerable amount of complexity, and security folks get very concerned about the properties of the resulting protocol as a result; more complexity means more opportunities for a successful attack.
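The first two points can be made concrete with a toy sketch (the counter-mode keystream below stands in for TLS and is not secure; all key names are made up): the same body encrypts to different bytes under each client's session key, so a shared cache of the ciphertext is useless, whereas an end-to-end MAC lets an intermediary cache and serve the plaintext while the client still detects tampering.

```python
import hashlib
import hmac

def keystream(key: bytes, n: int) -> bytes:
    # Toy SHA-256 counter-mode keystream -- illustration only, not secure.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same call also decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

body = b"<html>same cacheable page</html>"

# Per-client session keys (as in TLS): identical plaintext, different
# ciphertext, so client A's encrypted response can't be replayed to B.
ct_a = toy_encrypt(b"session-key-A", body)
ct_b = toy_encrypt(b"session-key-B", body)
assert ct_a != ct_b

# End-to-end integrity instead: the origin MACs the body with a key shared
# only with the client; the intermediary can cache and serve the body but
# any modification is detected by the client.
mac_key = b"client-origin-mac-key"
tag = hmac.new(mac_key, body, hashlib.sha256).digest()
tampered = body.replace(b"same", b"evil")
assert hmac.compare_digest(tag, hmac.new(mac_key, body, hashlib.sha256).digest())
assert not hmac.compare_digest(tag, hmac.new(mac_key, tampered, hashlib.sha256).digest())
```

The MAC route still needs a way to get `mac_key` to the client without the intermediary seeing it — which is the key-management cost mentioned above.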

That's not to say that it's impossible, just that there are some significant barriers to overcome, and we're not at a place where standardising something like this would do much good, because it's very likely we wouldn't see broad implementation in this round of work. 

Hope this helps,

Mark Nottingham   https://www.mnot.net/
Received on Friday, 15 August 2014 03:36:51 UTC
