
Re: Semantics of HTTPS

From: Yoav Nir <ynir@checkpoint.com>
Date: Tue, 7 Aug 2012 10:30:04 +0300
To: "Adrien W. de Croy" <adrien@qbik.com>
CC: Mark Nottingham <mnot@mnot.net>, Willy Tarreau <w@1wt.eu>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
Message-ID: <62E31B2C-2681-4A44-A3B3-55F59B21D0DC@checkpoint.com>

On Aug 7, 2012, at 1:39 AM, Adrien W. de Croy wrote:

> 
> I think we need to be clear what we are doing when we apply logic such 
> as
> 
> 1. TLS / HTTPS was not designed for inspection
> 2. therefore any inspection is a hack
> 3. therefore we should not allow/sanitise it
> 
> One could argue that 1. was a design failure (failure to cover all 
> requirements), and that it should just be fixed.  
> 
> One could also argue that hacks have as much right to be accepted as 
> anything else.  They exist for a purpose.
> 
> The real world REQUIRES inspection capability, for various reasons.
> 
> We can either ignore that requirement, and carry on with our arms race, 
> or come to some mutual agreement on how to deal with the very real and 
> in many (if not most) cases entirely legitimate requirement.

It's a strange arms race, because both sides have an ultimate weapon. The firewall can block anything outright; TLS inspection only makes it possible to be more granular. Users today have smartphones with cellular Internet (some have cellular dongles for their laptops), so they can go around the firewall entirely.

> At the moment, it's starting to look uglier and uglier.  Major sites 
> such as FB / Google move to TLS (maybe just to reduce blockage at 
> corporate firewalls?).

I don't think that's the purpose. If it is, they fail twice:
 - first, because they can be filtered by IP address.
 - second, because TLS proxies are getting very common.
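As a toy illustration of the first point (the blocklist ranges below are hypothetical, not any real firewall's configuration), IP-level filtering is just a membership check against a set of address ranges:

```python
import ipaddress

# Hypothetical blocklist of CIDR ranges an administrator might maintain
# (RFC 5737 documentation ranges used here as placeholders).
BLOCKED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocked(ip: str) -> bool:
    """Return True if the address falls inside any blocked range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_RANGES)
```

No decryption is needed for this: the destination address is visible on every packet whether or not the payload is TLS.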

> I can't count how many customers ask me a week how to block https sites 
> esp FB, gmail, youtube and twitter.  It's pointless arguing whether 
> someone should do this or not, we don't pay for their staff down-time.

Sites can be blocked by IP address. TLS proxies are used for more granular blocking, like blocking games on FB, or certain kinds of videos on YouTube.
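To make the contrast concrete, here is a minimal sketch (the rule table is hypothetical, not any vendor's policy format) of the kind of per-path rule a decrypting proxy can apply but an IP-level block cannot, because only the proxy sees the full URL:

```python
from urllib.parse import urlparse

# Hypothetical per-path rules: an IP block sees only the destination
# address, but a decrypting proxy sees the full URL of each request.
PATH_RULES = {
    "www.facebook.com": ["/games"],
    "www.youtube.com": ["/watch"],
}

def allowed(url: str) -> bool:
    """Allow the request unless its path falls under a blocked prefix."""
    u = urlparse(url)
    prefixes = PATH_RULES.get(u.hostname, [])
    return not any(u.path.startswith(p) for p in prefixes)
```

So `allowed("https://www.facebook.com/profile")` passes while `allowed("https://www.facebook.com/games/chess")` is refused, even though both go to the same IP addresses.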

> So we have MITM code in the lab.  Many others have deployed already.

It's been deployed for over six years.

> Next step if a site wants to do something about that is maybe start to 
> use client certificates.  
> 
> Anyone here from the TLS WG able to comment on whether there are plans 
> to combat MITM in this respect?  It's interesting to see the comment 
> about recent TLS WG rejection of support for inspection.

Well, the proposal that was rejected was mine. Client certificates don't work through a proxy, because while the client may trust certificates signed by the proxy, the server does not. My proposal would make this work, if all three of client, server, and proxy were changed to comply, but as Stephen said, the TLS working group does not want to do this.
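The failure can be modelled in a few lines. This is a toy sketch, with HMAC standing in for the client certificate's public-key signature over the handshake transcript (and, as a modelling shortcut, the server verifying with the same toy key rather than a public key): the proxy terminates the client's TLS session, then opens its own session to the server, and has no way to produce a proof that only the client's private key can produce over that second handshake.

```python
import hashlib
import hmac
import os

# Toy stand-in for the client's private key: a secret only the client holds.
client_key = os.urandom(32)

def sign(key: bytes, transcript: bytes) -> bytes:
    """Model a CertificateVerify-style signature over a handshake transcript."""
    return hmac.new(key, transcript, hashlib.sha256).digest()

# Session 1, client <-> proxy: the client signs *that* transcript.
client_proxy_transcript = b"handshake: client <-> proxy"
client_sig = sign(client_key, client_proxy_transcript)

# Session 2, proxy <-> server: a different transcript, and the proxy
# does not hold client_key, so the best it can do is guess.
proxy_server_transcript = b"handshake: proxy <-> server"
proxy_sig = sign(os.urandom(32), proxy_server_transcript)

def server_accepts(sig: bytes) -> bool:
    """Server-side check on session 2 (toy verification with the shared key)."""
    expected = sign(client_key, proxy_server_transcript)
    return hmac.compare_digest(sig, expected)
```

The proxy's own signature fails, and simply replaying the client's signature also fails because it covers the wrong transcript; this is why all three parties would have to change for client certificates to traverse a proxy.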

> At the end of the day, the requirement is not going away, and it's only 
> my opinion, but I think we'd get something that 
> 
> a) works a lot better (more reliably)
> b) better reflects reality and allows users to make informed choices

The best we can do is something along the lines of "Your traffic to www.mybank.com is being decrypted and inspected by sslproxy.example.com. Is this OK?"

Do you think this allows a user to make an informed decision? Usability studies suggest that the user will click whatever button gets him to www.mybank.com, without thinking about the implications. That may or may not be the correct decision, but changing browser UI to fit security geeks does not necessarily make sense.

> if we actually accepted the reality of this requirement and designed 
> for it.  IMO b actually results in more security.

Not surprisingly (as I have a proposal in this area): +1

> As for the issue of trust, this results in a requirement to trust the 
> proxy.  We don't have a system that does not require any trust in any 
> party.  We trust the bank with our money, we trust the CA to properly 
> issue certificates and to ensure safe keeping of their private keys.  
> Most people IME are quite happy to have their web surfing scanned for 
> viruses.  I don't see a problem with some real estate on a browser 
> showing that they are required to trust the proxy they are using, or 
> don't go to the site. 
> 
> Otherwise you have to inspect the certificate of every secure and 
> sensitive site you go to in order to check if it's signed by who you 
> expect (e.g. a CA instead of your proxy).  It's completely unrealistic 
> to expect users to do that, and history has shown that educating 
> end-users about the finer points of security is not easily done.

Yoav
Received on Tuesday, 7 August 2012 07:31:48 GMT
