
Re: Semantics of HTTPS

From: Adrien W. de Croy <adrien@qbik.com>
Date: Mon, 06 Aug 2012 23:37:15 +0000
To: "Stephen Farrell" <stephen.farrell@cs.tcd.ie>
Cc: "Mark Nottingham" <mnot@mnot.net>, "Willy Tarreau" <w@1wt.eu>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
Message-Id: <em8c295a69-d3a9-4f37-9bbf-7bd9df07e0bc@bombed>

------ Original Message ------
From: "Stephen Farrell" <stephen.farrell@cs.tcd.ie>
>Hiya,
>
>Some points below on what is a tricky issue but one where I think
>the status quo is better than the offered alternatives.
>
>On 08/06/2012 11:39 PM, Adrien W. de Croy wrote:
>
>>
>>
>>I think we need to be clear what we are doing when we apply logic such as
>>
>>1. TLS / HTTPS was not designed for inspection
>>2. therefore any inspection is a hack
>>3. therefore we should not allow/sanitise it
>>
>>One could argue that 1. was a design failure (failure to cover all
>>requirements), and that it should just be fixed.
>>One could also argue that hacks have as much right to be accepted as
>>anything else.  They exist for a purpose.
>>
>
>
>Yep. To break e2e security. But that's not a very defensible
>purpose in an organisation (the IETF) where the e2e argument is
>taken seriously.
>

What e2e security?  There is none currently with TLS/https.
>>
>>The real world REQUIRES inspection capability, for various reasons.
>>
>
>
>The real world also REQUIRES lawful intercept. But we (the IETF)
>don't do that, and we're right I think. (That is, I agree with
>our consensus position.)
>

So do I, and this proposal does not affect lawful intercept, which would 
continue to operate the way it currently does.

We're only proposing proxying of https, which is done with the 
knowledge of the client, and is not useful for covert intercepts.
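
For comparison (www.example.com is just a stand-in), all a proxy gets 
today is a request for an opaque tunnel:

    CONNECT www.example.com:443 HTTP/1.1
    Host: www.example.com:443

Once the tunnel is up, everything is ciphertext to the proxy, so the 
only way for it to inspect anything is to quietly substitute its own 
certificate.  What we're proposing replaces that with something the 
client knowingly takes part in.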

>>
>>We can either ignore that requirement, and carry on with our arms race,
>>or come to some mutual agreement on how to deal with the very real and
>>in many (if not most) cases entirely legitimate requirement.
>>
>>At the moment, it's starting to look uglier and uglier.  Major sites
>>such as FB / Google move to TLS (maybe just to reduce blockage at
>>corporate firewalls?).
>>
>>I can't count how many customers ask me a week how to block https sites
>>esp FB, gmail, youtube and twitter.  It's pointless arguing whether
>>someone should do this or not, we don't pay for their staff down-time.
>>
>>So we have MITM code in the lab.  Many others have deployed already.
>>
>
>
>Well just block those sites if you must. I don't see why inspection
>is somehow better. I do see that some people might think inspection
>is better, but if that's a falsehood then no conclusion can be drawn.
>(False => anything, logically.) Evidence of the effectiveness of
>MITM inspection (vs. endpoint mechanisms) would be good, but seems
>to be missing.
>
I agree; I'd like to see some more evidence, e.g. rates of prevention 
of malware (which commonly uses https to retrieve its payload).

But there are plenty of security software vendors who will tell you 
that malware increasingly spreads over https.

They do a good job of convincing customers they need to scan https.

Plenty of intermediary vendors are more than happy to gain a 
competitive advantage by providing this service.

The horse has already bolted on this issue.
>>
>>Next step if a site wants to do something about that is maybe start to
>>use client certificates.
>>Anyone here from the TLS WG able to comment on whether there are plans
>>to combat MITM in this respect?
>>
>
>
>I don't get the question. TLS is designed to combat MITM with or
>without client certs. That's a fundamental requirement for TLS.
>

Yet MITM proxies exist and continue to be deployed, presumably 
successfully.  My question was whether the WG is taking aim at this 
problem (which would be an escalation of the arms race).
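
And they're not hard to spot if you bother to look.  Anyone can check 
which CA actually signed the certificate they received, e.g. 
(www.example.com is just a stand-in):

    openssl s_client -servername www.example.com -connect www.example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -issuer

Behind an intercepting proxy the issuer comes back as the proxy's own 
CA rather than a public one.  Of course almost nobody checks, which is 
rather the point.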

>>
>>It's interesting to see the comment
>>about recent TLS WG rejection of support for inspection.
>>
>
>
>Recent and repeated. I think this is maybe the 3rd time.
>

I don't know enough about all the intricate issues to know whether 
there is a potential profile for TLS that enables trusted 3rd party 
inspection or not.  But that is WAY OT for this list.

>>
>>At the end of the day, the requirement is not going away, and it's only
>>my opinion, but I think we'd get something that
>>a) works a lot better (more reliably)
>>b) better reflects reality and allows users to make informed choices
>>
>
>
>Feel free to propose a specification that meets your proposed
>requirements. That is hard-to-impossible IMO.
>

GET https://etc
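
i.e. put the https URI on the request line to the proxy, the same way 
proxied plain http works today.  As a sketch only (www.example.com is 
a stand-in):

    GET https://www.example.com/ HTTP/1.1
    Host: www.example.com

The proxy can then apply policy to a request it can actually see, and 
make the TLS connection to the origin on the client's behalf, with the 
client's knowledge rather than behind its back.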

>>
>>if we actually accepted the reality of this requirement and designed for
>>it.  IMO b actually results in more security.
>>As for the issue of trust, this results in a requirement to trust the
>>proxy.
>>
>
>
>You left out an important thing: it requires sites (e.g. a bank)
>to trust proxies the site has never heard of with the site's
>customer data, e.g. payment information.
>

Sure, my point was client-centric.  
However, banks are already in this situation.  I use the online 
systems of three banks.  None require me to use a client certificate.  
Is this just a meteor waiting to hit?

They are already demonstrating either ignorance or trust of MITM 
proxies operated by client organisations.  I won't do them the 
disrespect of claiming it's ignorance.

>
>
>Do we really want to engineer the web so as to allow a company
>proxy to prevent payments to the company's favourite bad cause?
>That's what's being enabled here. It's a bad plan.
>

It can already happen.  If we want to stop it, that's yet another 
direction to move in.  What we're proposing has no impact on that.

Adrien
>
>
>It might be tractable to figure how to get a user to trust her
>employer's proxy for some things, but that's just nowhere near a
>full solution IMO.
>
>Cheers,
>S
>
>
>>
>> We don't have a system that does not require any trust in any
>>party.  We trust the bank with our money, we trust the CA to properly
>>issue certificates and to ensure safe keeping of their private keys.
>>Most people IME are quite happy to have their web surfing scanned for
>>viruses.  I don't see a problem with some real estate on a browser
>>showing that they are required to trust the proxy they are using, or
>>don't go to the site.
>>Otherwise you have to inspect the certificate of every secure and
>>sensitive site you go to in order to check if it's signed by who you
>>expect (e.g. a CA instead of your proxy).  It's completely unrealistic
>>to expect users to do that, and history has shown that educating
>>end-users about the finer points of security is not easily done.
>>
>>
>>Adrien
>>
>>
>>------ Original Message ------
>>From: "Mark Nottingham" <mnot@mnot.net>
>>To: "Willy Tarreau" <w@1wt.eu>
>>Cc: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
>>Sent: 7/08/2012 9:16:48 a.m.
>>Subject: Re: Semantics of HTTPS
>>
>>>
>>>On 06/08/2012, at 4:14 PM, Willy Tarreau <w@1wt.eu> wrote:
>>>
>>>
>>>
>>>>>
>>>>>
>>>>>Right. That's a big change from the semantics of HTTPS today, though;
>>>>>right now, when I see that, I know that I have end-to-end TLS.
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>No, you *believe* you do, you really don't know. That's clearly the
>>>>problem with the way it works: man-in-the-middle proxies are still
>>>>able to intercept it and to forge certs they sign with their own CA,
>>>>and you have no way to know if your communications are snooped or not.
>>>>
>>>>
>>>
>>>
>>>
>>>It's a really big logical leap from the existence of an attack to
>>>changing the fundamental semantics of the URI scheme. And, that's what
>>>a MITM proxy is -- it's not legitimate, it's not a recognised role,
>>>it's an attack. We shouldn't legitimise it.
>>>
>>>Cheers,
>>>
>>>--
>>>Mark Nottingham
>>>http://www.mnot.net/
>>>
>