- From: Adrien W. de Croy <adrien@qbik.com>
- Date: Mon, 06 Aug 2012 22:39:09 +0000
- To: "Mark Nottingham" <mnot@mnot.net>, "Willy Tarreau" <w@1wt.eu>
- Cc: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
I think we need to be clear about what we are doing when we apply logic such as:

1. TLS / HTTPS was not designed for inspection;
2. therefore any inspection is a hack;
3. therefore we should not allow / sanction it.

One could argue that 1 was a design failure (a failure to cover all the requirements), and that it should simply be fixed. One could also argue that hacks have as much right to be accepted as anything else; they exist for a purpose.

The real world REQUIRES inspection capability, for various reasons. We can either ignore that requirement and carry on with our arms race, or come to some mutual agreement on how to deal with a very real and in many (if not most) cases entirely legitimate requirement.

At the moment it's starting to look uglier and uglier. Major sites such as FB / Google move to TLS (maybe just to reduce blockage at corporate firewalls?). I can't count how many customers ask me each week how to block https sites, especially FB, gmail, youtube and twitter. It's pointless arguing whether someone should do this or not; we don't pay for their staff down-time.

So we have MITM code in the lab. Many others have deployed already. The next step, if a site wants to do something about that, is maybe to start using client certificates. Anyone here from the TLS WG able to comment on whether there are plans to combat MITM in this respect? It's interesting to see the comment about the recent TLS WG rejection of support for inspection.

At the end of the day, the requirement is not going away. It's only my opinion, but if we actually accepted the reality of this requirement and designed for it, I think we'd get something that:

a) works a lot better (more reliably), and
b) better reflects reality and allows users to make informed choices.

IMO (b) actually results in more security.

As for the issue of trust: yes, this results in a requirement to trust the proxy. But we don't have a system that does not require any trust in any party. We trust the bank with our money; we trust the CA to properly issue certificates and to ensure the safekeeping of their private keys. Most people IME are quite happy to have their web surfing scanned for viruses.

I don't see a problem with some real estate in a browser showing that the user is required to trust the proxy they are using, or else not go to the site. Otherwise you have to inspect the certificate of every secure and sensitive site you go to in order to check whether it's signed by who you expect, e.g. a CA instead of your proxy (a rough sketch of that check is included at the end of this message). It's completely unrealistic to expect users to do that, and history has shown that educating end-users about the finer points of security is not easily done.

Adrien

------ Original Message ------
From: "Mark Nottingham" <mnot@mnot.net>
To: "Willy Tarreau" <w@1wt.eu>
Cc: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
Sent: 7/08/2012 9:16:48 a.m.
Subject: Re: Semantics of HTTPS

> On 06/08/2012, at 4:14 PM, Willy Tarreau <w@1wt.eu> wrote:
>
>>> Right. That's a big change from the semantics of HTTPS today, though; right
>>> now, when I see that, I know that I have end-to-end TLS.
>>
>> No, you *believe* you do, you really don't know. That's clearly the problem
>> with the way it works: man-in-the-middle proxies are still able to intercept
>> it and to forge certs they sign with their own CA, and you have no way to know
>> if your communications are snooped or not.
>
> It's a really big logical leap from the existence of an attack to changing the fundamental semantics of the URI scheme.
> And, that's what a MITM proxy is -- it's not legitimate, it's not a recognised role, it's an attack. We shouldn't legitimise it.
>
> Cheers,
>
> --
> Mark Nottingham
> http://www.mnot.net/
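To make the "inspect the certificate yourself" step above concrete, here is a minimal Python sketch of that check. The host name and the idea of an "expected" issuer are illustrative assumptions, not anything specified in this thread. The sketch connects to a site, pulls the certificate actually served, and reports who signed it; a connection intercepted by the kind of MITM proxy Willy describes shows the proxy's own CA as the issuer instead of a public one.

    # Minimal sketch: fetch the certificate a site actually serves and
    # report who issued it. HOST is a hypothetical site to check.
    import socket
    import ssl

    HOST = "www.example.com"
    PORT = 443

    context = ssl.create_default_context()
    with socket.create_connection((HOST, PORT)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()

    # getpeercert() returns the issuer as a tuple of RDN tuples;
    # flatten it into a dict for readability.
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    print("Issued by:", issuer.get("organizationName"), "/", issuer.get("commonName"))

Note that if the intercepting proxy's CA has been installed in the local trust store (the usual corporate deployment), the handshake succeeds and the proxy's CA simply appears as the issuer; an unknown MITM fails verification outright. Either way, the larger point stands: no end user is going to run this check by hand for every sensitive site.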
Received on Monday, 6 August 2012 22:39:33 UTC