
Re: TLS over http2 frames

From: Ben Burkert <ben@benburkert.com>
Date: Fri, 15 Aug 2014 12:29:45 -0700
To: Mark Nottingham <mnot@mnot.net>
Cc: "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Message-ID: <etPan.53ee5fa9.6b8b4567.50fc@HackBook-Pro.local>
Mark, thanks for the great response.

Note that we already specify how to use CONNECT over HTTP/2: 
http://http2.github.io/http2-spec/#CONNECT 

... which does allow tunnelling TLS over a HTTP/2 stream. That doesn't directly address your needs, however. 
Yes, that section on using CONNECT to start a TCP proxy over DATA frames is what got me thinking about the problem of end-to-end encryption through an intermediary. What I'm proposing is more akin to a two-layer onion router, where the outside layer is terminated and re-encrypted at every hop and the inside layer is an end-to-end circuit.
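To make the layering concrete, here is a toy sketch of the idea. This is illustration only: XOR with a hash-derived keystream stands in for what would really be per-layer TLS, and all of the key names are hypothetical.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: chained SHA-256 of the key. NOT a real cipher --
    # in the actual proposal each layer would be a TLS session.
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Inner (end-to-end) key is shared only by client and origin server;
# hop (outer) keys are shared with the intermediary at each hop.
inner_key = b"client<->origin"
hop1_key  = b"client<->cdn"
hop2_key  = b"cdn<->origin"

plaintext = b"sensitive response body"

# Client: apply the end-to-end layer first, then the first hop layer.
wire1 = xor(xor(plaintext, inner_key), hop1_key)

# Intermediary: strips its hop layer and re-wraps for the next hop,
# but only ever sees the inner-encrypted bytes.
seen_by_cdn = xor(wire1, hop1_key)
wire2 = xor(seen_by_cdn, hop2_key)

# Origin: strips its hop layer, then the end-to-end layer.
recovered = xor(xor(wire2, hop2_key), inner_key)
assert recovered == plaintext
assert seen_by_cdn != plaintext  # intermediary cannot read the body
```

The point of the sketch is only the ordering: the outer layer is removed and re-applied at every hop, while the inner layer survives unchanged from client to origin.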

* Encapsulating TLS (or something with similar properties) inside of HTTP doesn't imply that you can cache it (one of the biggest benefits of a CDN). TLS uses a session key that changes from client to client (and often between different connections of the same client); you can't replay a response from client A to client B, because it uses a different session key. 

I agree that having the intermediary replay encrypted responses is a bad idea. The abstraction I'm suggesting gives the server two layers on which to send a response. If the server wants the intermediary to cache a response, it sends that response over the outside layer with the appropriate caching headers. If the response contains sensitive data that must be encrypted end-to-end, the server sends it over the inside layer.
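A sketch of what that server-side dispatch might look like (the `choose_layer` helper and the exact header logic are hypothetical, purely to illustrate the two-layer idea):

```python
def choose_layer(response_headers: dict) -> str:
    # Hypothetical dispatch: responses the intermediary is allowed to
    # cache go over the outer (intermediary-visible) layer; everything
    # else stays end-to-end on the inner layer.
    cc = response_headers.get("cache-control", "")
    if "public" in cc and "no-store" not in cc:
        return "outer"
    return "inner"

# Cacheable static asset: the intermediary may see and cache it.
assert choose_layer({"cache-control": "public, max-age=3600"}) == "outer"
# Sensitive or private responses default to the end-to-end layer.
assert choose_layer({"cache-control": "private"}) == "inner"
assert choose_layer({}) == "inner"
```

Defaulting to the inner layer matches the intent here: the server must opt in to intermediary visibility, rather than opt out of it.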

* You could take alternative approaches, but any scheme that allows multiple people to read the same encrypted content potentially leaks individuals' activities to the intermediary, which has serious privacy consequences if you're aiming for end-to-end. e2e integrity, OTOH, is very possible, but it adds more key management (see below). 
Right, it would be the responsibility of the client & server to ensure that data is sent over the appropriate layer, which complicates their implementations. But an extension that provides end-to-end encryption through an intermediary may be worth the extra complexity, especially if it allows CDNs to switch from sharding to an intermediary strategy without sacrificing end-to-end encryption between the client & server.

* Adding another layer like this adds a considerable amount of complexity, and security folks get very concerned about the properties of the resulting protocol as a result; more complexity means more opportunities for a successful attack. 

That's not to say that it's impossible, just that there are some significant barriers to overcome, and we're not at a place where standardising something like this would do much good, because it's very likely we wouldn't see broad implementation in this round of work. 
Absolutely. While the idea of layering TLS sessions has been shown to work in principle by the Tor project, an extension like this would certainly complicate the security model for browsers & servers. Hopefully the extension mechanism makes it possible to iterate on these problems (mostly) independently of the main protocol specification.

I proposed the extension because my takeaway from reading the spec is that for CDNs to take full advantage of http2 they should act as intermediaries, and I haven't seen any resources or discussion about the problems that intermediaries pose to end-to-end encryption. My premise might be completely wrong; perhaps CDNs will stick to the out-of-band/sharding strategy with http2. I would still argue that providing a mechanism for smarter intermediaries without sacrificing end-to-end encryption would open the door to new types of services for improving site performance.

Cheers, -Ben


On August 14, 2014 at 8:36:31 PM, Mark Nottingham (mnot@mnot.net) wrote:

Hi Ben,  

On 15 Aug 2014, at 12:28 pm, Ben Burkert <ben@benburkert.com> wrote:  

> Hello,  
>  
> I believe that there is a need for an http2 extension that allows for TLS over http2 frames. An extension to support simultaneous encryption between the client & intermediary and end-to-end encryption between the client & server on the same connection. The use case is analogous to browsers that send HTTPS requests over an encrypted VPN connection.  

Note that we already specify how to use CONNECT over HTTP/2:  
http://http2.github.io/http2-spec/#CONNECT  

... which does allow tunnelling TLS over a HTTP/2 stream. That doesn't directly address your needs, however.  

> The CDN industry has largely evolved around the constraints of the modern web protocols; especially wrt http1.1 and https. In a number of ways the proposed http2 protocol has been designed to address flaws & oversights in http1.1 that made CDNs a necessity.  

I don't know of anyone saying that HTTP/2 will get rid of CDNs; they're largely complementary.  

> A large amount of design effort has been focused on reusing and optimizing a single client/server connection. This is at odds with the CDN practice of serving a website's assets through an out-of-band connection to the CDN's network.  

Common practice for CDNs today is to CNAME the hostname over to the CDN, so that all requests can be served through it. This isn't universal, of course, but it is very common.  

> As CDNs adapt to http2 their role may evolve into an intermediary layer that provides their service over the same http2 connection in between client and server. We are already seeing similar services offered by CDNs. For example, traffic to www.whitehouse.gov is served first through Akamai's network before requests reach the drupal backend servers. This type of service can benefit clients by providing high availability and expanded presence, but at a cost to security; end-to-end encryption is not possible. Encrypted traffic must be terminated by the intermediary and then re-encrypted on its way to the server. This is the case for http1.1 as well as http2, which is at odds with http2's goal of improved security.  

Just to be clear -- while we have done some things to improve security in HTTP/2, it's not a chartered goal.  

> An extension to allow TLS over frames addresses this problem by providing for a second layer of encryption. The outside (existing) layer would be used for communication between the client/server and intermediary, while the inside layer would provide end-to-end encryption between the client and server. Requests and responses would also be layered: the outside request/response would contain headers shared (and actionable upon) by the intermediary and the other two parties. The inside request/response would be passed through blindly by the intermediary. To put it a different way, the outside request/response is there to help the intermediary route/cache/serve the inside request/response which it cannot see.  

This has been discussed a fair amount (albeit, mostly casually). There are a couple of big problems to address, though:  

* Encapsulating TLS (or something with similar properties) inside of HTTP doesn't imply that you can cache it (one of the biggest benefits of a CDN). TLS uses a session key that changes from client to client (and often between different connections of the same client); you can't replay a response from client A to client B, because it uses a different session key.  

* You could take alternative approaches, but any scheme that allows multiple people to read the same encrypted content potentially leaks individuals' activities to the intermediary, which has serious privacy consequences if you're aiming for end-to-end. e2e integrity, OTOH, is very possible, but it adds more key management (see below).  

* Adding another layer like this adds a considerable amount of complexity, and security folks get very concerned about the properties of the resulting protocol as a result; more complexity means more opportunities for a successful attack.  

That's not to say that it's impossible, just that there are some significant barriers to overcome, and we're not at a place where standardising something like this would do much good, because it's very likely we wouldn't see broad implementation in this round of work.  

Hope this helps,  

--  
Mark Nottingham https://www.mnot.net/  
Received on Friday, 15 August 2014 19:30:29 UTC
