Re: Yet another trusted proxy suggestion

Hi Yoav,

Thanks for this. A few initial impressions below.


On 26 Nov 2013, at 11:16 pm, Yoav Nir <synp71@live.com> wrote:

> So after several threads, both here and in private, I get a feeling that the opposition is more to breaking TLS than to having an HTTP(S) proxy.
> 
> Given that, I would like to lay out a proposal for the flow of proxy detection and usage, without getting too low-level.
> 
> As an example, we'll assume there's a user, alice@example.com, who uses her computer to download something from https://download.adobe.com.  There are two proxies involved in this:
> - one is a next-generation firewall: sslproxy.example.com
> - one is a CDN server called a1953.d.cdn.net
> 
> Note that under this scenario, a1953.d.cdn.net resolves to the same IP address (192.0.2.5) as download.adobe.com. Maybe this can be improved, but that's how CDNs work for now.

I’m not sure what “improved” means here. What are you trying to fix?


> Another thing to note is that there are actually two entities in the first part. There's the proxy, which deals in HTTP(S), and then there's the firewall which prevents E2E communications. They may be co-located, but they don't have to be.

Right. Another way to say this is that connections can be refused at the HTTP layer, or at a lower layer, and clients need to deal with both.

HTTP refusals *could* be more information-rich, if we choose to enable that (as I touched upon in draft-nottingham-http-proxy-problem).
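
To make that concrete, here's a rough client-side sketch (mine, not from your proposal) of the two refusal styles; the function name and return shape are purely illustrative:

    import socket

    def classify_refusal(host, port=443, timeout=5):
        """Return ('lower-layer', None) when the connection is blocked below
        HTTP (RST, ICMP, silent drop); no detail is available to the client.
        Otherwise hand the socket back so the HTTP layer can look for a
        richer, proxy-generated refusal."""
        try:
            sock = socket.create_connection((host, port), timeout=timeout)
        except OSError:
            return ("lower-layer", None)
        return ("http-layer", sock)   # a proxy could answer here with an
                                      # information-rich HTTP response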


> Step #1
> =======
> The browser resolves download.adobe.com, and opens a connection to 192.0.2.5 port 443. The firewall blocks this. I'm not sure if it's preferable to block this with some ICMP or with a new TLS alert (so the firewall completes the 3-way TCP handshake, receives the ClientHello and only then sends the error), but I'm tending towards the latter. The new TLS alert is called "mandatory proxy" and contains the URL of that proxy: https://sslproxy.example.com:443

That seems like it’s changing TLS to accommodate proxying, which Stephen was vehemently against...

When I’ve thought about this use case, I’ve been assuming that the MITM/Portal/Firewall/Whatever would negotiate TLS with its *own* certificate and then present a HTTP status code whose semantic is explicitly “This message is from a part of your network infrastructure; it is not from the origin” so that the client could present/use it as such (importantly, for purposes of cert validity as well as source of content).

That seems to stay in HTTP, making Stephen happy, and also gives a richer, more developer-friendly path to proxy-to-UA communication…
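
Roughly something like the sketch below; the 430 status code and the Proxy-Info header are made up for illustration, the point is only that the message is explicitly marked as coming from the intermediary rather than the origin:

    # Hypothetical proxy-generated refusal; 430 and Proxy-Info are not
    # proposed registrations, just placeholders for the semantic above.
    INTERMEDIARY_REFUSAL = (
        b"HTTP/1.1 430 Network Intermediary\r\n"
        b"Proxy-Info: https://sslproxy.example.com:443\r\n"
        b"Content-Length: 0\r\n"
        b"\r\n"
    )

    def from_intermediary(status_code):
        """Client-side check: this response came from network infrastructure,
        so certificate validity and content are judged against the proxy's
        own identity, not download.adobe.com's."""
        return status_code == 430   # assumption: one dedicated status code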


> Step #2
> =======
> The browser consults local policy about whether such a proxy might be acceptable, and if so, opens a new TLS connection to the proxy and verifies the certificate. This allows adding some clever UX that shows the user what device is on-path, and also allows pre-configuring the trusted proxy based on name.

Nod. I get a little nervous about us defining such policy languages (been burned in the past), but OK.
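
The most I'd want to see is something on the order of a locally configured allow-list, checked after the proxy's certificate validates. A minimal sketch, with illustrative names:

    TRUSTED_PROXIES = {"sslproxy.example.com"}   # pre-configured by user/admin

    def proxy_acceptable(proxy_host, validated_cert_subject):
        """Accept the proxy only if it is on the local allow-list and the
        name we were told to connect to matches the name in its (already
        validated) certificate."""
        return (proxy_host in TRUSTED_PROXIES
                and validated_cert_subject == proxy_host)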


> Step #3
> =======
> The browser sends a CONNECT command to the proxy (maybe that has to be enhanced as well?) to connect to https://download.adobe.com. The proxy tries to connect, and then either of two things happens:
> 1. a1953.d.cdn.net has a certificate for download.adobe.com - that is what we do today.
> 2. a1953.d.cdn.net has a certificate for a1953.d.cdn.net, and issues a "mandatory proxy" alert with its name.
> In the former case, things will work as today. In the latter case, I'm not sure how the proxy (or browser for that matter) can know that a1953.d.cdn.net is a trusted proxy for download.adobe.com. Having the private key is a good indication, but I think we want to get away from that.
> Either way, the connection is established.

See other comments about CONNECT.

WRT #2, this sounds like it’s effectively “certificate delegation,” which has been discussed casually since Berlin (I think you were in on some of those discussions). There are some really interesting use cases for this, but we need to consider the security properties carefully.
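
To make the security question concrete, the minimum a delegation would have to assert is something like the shape below. This is a hypothetical structure, not an existing mechanism:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Delegation:
        origin: str          # e.g. "download.adobe.com"
        delegate: str        # e.g. "a1953.d.cdn.net"
        not_after: datetime  # delegation expiry
        signature: bytes     # made with a key bound to the origin's certificate

    def delegation_applies(d, origin, delegate):
        """Name and expiry checks only; verifying the signature against the
        origin's certificate key is where the real security weight sits."""
        return (d.origin == origin
                and d.delegate == delegate
                and datetime.now(timezone.utc) < d.not_after)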


> Step #4
> =======
> If there is some personalized information, the CDN server can open a connection to the real server. This time there is a pre-arrangement, so the names in the certificate may be different.

More detail here? 


> 
> Step #5
> =======
> Requests and responses go through the series of nodes. We should note that things like flow control are hop-by-hop, so the SETTINGS frame is not forwarded; it applies hop-by-hop. We would also like to have some information for both client and server. Here's my suggestion for this:
> ClientInfo structure:
> - certificate chain sent by the client (if any)
> - ciphersuite used
> - (this) proxy certificate chain.
> - subject (or SKID) of received server (or next proxy) certificate
> 
> ServerInfo structure:
> - certificate chain sent by the server (or another proxy)
> - ciphersuite used
> - subject (or SKID) of (this) proxy certificate
> 
> Each proxy sends a ClientInfo structure to the server. This could be done as a POST to a /.well-known URI or as a new special frame. I prefer the latter, but I don't know how well that will sit with several proxies trying to push the same resource to the server.
> Each proxy also sends a ServerInfo structure to the client. Again, this could be a pushed resource or a special frame. Same concern applies.

Is this all inside of HTTP/2? I.e., are we talking about new frame types for ClientInfo and ServerInfo (as per previous discussion)?
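
If it is frames, I'd picture the ServerInfo payload as something like the sketch below. The 0xf0 type code and the field layout are entirely made up; they just mirror the fields you list:

    import struct

    SERVER_INFO_FRAME_TYPE = 0xf0   # hypothetical extension frame type

    def encode_server_info(cert_chain_der, ciphersuite, proxy_subject):
        """Pack the ServerInfo fields into a length-prefixed payload.
        cert_chain_der: list of DER-encoded certificates (bytes);
        ciphersuite: 16-bit TLS ciphersuite code."""
        parts = [struct.pack("!H", len(cert_chain_der))]
        for cert in cert_chain_der:
            parts.append(struct.pack("!I", len(cert)) + cert)
        parts.append(struct.pack("!H", ciphersuite))
        subject = proxy_subject.encode("utf-8")
        parts.append(struct.pack("!H", len(subject)) + subject)
        return b"".join(parts)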


> 
> Step #6
> =======
> At the end of this, both client and server can construct the chain of certificate chains and the ciphersuite used in each leg of the trip.
> 
> TLS is not changed by this, but HTTPS is.
> 
> I'd love to hear comments on this. If there's interest I could write it up in a draft, but having been burned twice, I'd like to know there's interest first.

Definitely interesting.
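
For what it's worth, the end state I picture on the client side is just an ordered list, one entry per leg; field names below are illustrative:

    def chain_of_chains(server_infos):
        """server_infos: ServerInfo structures received in hop order, nearest
        hop first. Returns (cert_chain, ciphersuite) per leg so the client
        can audit every hop of the path."""
        return [(info["cert_chain"], info["ciphersuite"]) for info in server_infos]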

Cheers,

--
Mark Nottingham   http://www.mnot.net/

Received on Tuesday, 26 November 2013 22:50:45 UTC