Re: Semantics of HTTPS

From: Stephen Farrell <stephen.farrell@cs.tcd.ie>
Date: Tue, 07 Aug 2012 00:56:23 +0100
Message-ID: <502059A7.9000600@cs.tcd.ie>
To: "Adrien W. de Croy" <adrien@qbik.com>
CC: Mark Nottingham <mnot@mnot.net>, Willy Tarreau <w@1wt.eu>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>

Hiya,

On 08/07/2012 12:37 AM, Adrien W. de Croy wrote:
> 
> ------ Original Message ------
> From: "Stephen Farrell" <stephen.farrell@cs.tcd.ie>
>> Hiya,
>>
>> Some points below on what is a tricky issue but one where I think
>> the status quo is better than the offered alternatives.
>>
>> On 08/06/2012 11:39 PM, Adrien W. de Croy wrote:
>>
>>>
>>>
>>> I think we need to be clear what we are doing when we apply logic
>>> such as
>>>
>>> 1. TLS / HTTPS was not designed for inspection
>>> 2. therefore any inspection is a hack
>>> 3. therefore we should not allow/sanitise it
>>>
>>> One could argue that 1. was a design failure (failure to cover all
>>> requirements), and that it should just be fixed.
>>> One could also argue that hacks have as much right to be accepted as
>>> anything else.  They exist for a purpose.
>>>
>>
>>
>> Yep. To break e2e security. But that's not a very defensible
>> purpose in an organisation (the IETF) where the e2e argument is
>> taken seriously.
>>
> 
> what e2e security?  There is none currently with TLS/https.

Disagree. There are mitm attacks. They are not ubiquitous.
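(For concreteness: the way a client could notice this kind of interception is that a MITM proxy must re-sign the server certificate with its own CA, so the issuer the client sees changes. A minimal sketch in Python; the issuer structures mimic what `ssl.SSLSocket.getpeercert()` returns, and all names below are invented for illustration, not taken from any real deployment:)

```python
# Sketch: detect a possible TLS interception proxy by checking the
# issuer of the certificate the client actually received.
# The dicts below mimic the structure returned by
# ssl.SSLSocket.getpeercert(); the CA names are hypothetical.

def issuer_common_name(cert):
    """Extract the issuer CN from a getpeercert()-style dict."""
    for rdn in cert.get("issuer", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return None

def looks_intercepted(cert, expected_issuers):
    """True if the issuer CN is not one we expect for this site."""
    return issuer_common_name(cert) not in expected_issuers

# A certificate as a public CA would issue it (hypothetical name):
direct_cert = {"issuer": ((("commonName", "Example Public CA"),),)}
# The same site as re-signed by a MITM proxy's private CA:
proxied_cert = {"issuer": ((("commonName", "Corp Proxy Root"),),)}

expected = {"Example Public CA"}
print(looks_intercepted(direct_cert, expected))   # False
print(looks_intercepted(proxied_cert, expected))  # True
```

(Which is exactly the check the thread later notes no ordinary user actually performs by hand.)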

>>>
>>> The real world REQUIRES inspection capability, for various reasons.
>>>
>>
>>
>> The real world also REQUIRES lawful intercept. But we (the IETF)
>> don't do that, and we're right I think. (That is, I agree with
>> our consensus position.)
>>
> 
> so do I, and this proposal does not affect lawful intercept which would
> continue to operate in the way it currently does.

Your argument was: "real world REQUIRES foo, therefore we MUST do foo."
That is demonstrably not the case with LI so this part of your
argument falls.

> 
> We're only proposing proxying of https, which is done with the knowledge
> of the client, and is not useful for covert intercepts.

Even though based on a bad argument, I don't agree with your last
conclusion.

>>> We can either ignore that requirement, and carry on with our arms race,
>>> or come to some mutual agreement on how to deal with the very real and
>>> in many (if not most) cases entirely legitimate requirement.
>>>
>>> At the moment, it's starting to look uglier and uglier.  Major sites
>>> such as FB / Google move to TLS (maybe just to reduce blockage at
>>> corporate firewalls?).
>>>
>>> I can't count how many customers ask me a week how to block https sites
>>> esp FB, gmail, youtube and twitter.  It's pointless arguing whether
>>> someone should do this or not, we don't pay for their staff down-time.
>>>
>>> So we have MITM code in the lab.  Many others have deployed already.
>>>
>>
>>
>> Well just block those sites if you must. I don't see why inspection
>> is somehow better. I do see that some people might think inspection
>> is better, but if that's a falsehood then no conclusion can be drawn.
>> (False => anything, logically.) Evidence of the effectiveness of
>> MITM inspection (vs. endpoint mechanisms) would be good, but seems
>> to be missing.
>>
> I agree, I'd like to see some more evidence, e.g. rates of prevention of
> malware (which commonly uses https to retrieve payload).

The evidence needed would not be "I can do x,y,z at the mitm" but
rather "I can do x,y,x NN% better at the mitm compared to the endpoint."

> But there are plenty of security software vendors who will tell you that
> malware spreads more and more with https.

Sure. Marketing exists. That's not a telling argument here.

> They do a good job of convincing customers they need to scan https.
> 
> Plenty of intermediary vendors are more than happy to gain a competitive
> advantage by providing this service.
> 
> The horse has already bolted on this issue.

People make money from LI products too. We do not need to prevent
all horses bolting in all directions. (And should not argue based
on metaphors alone.)

>>>
>>> Next step if a site wants to do something about that is maybe start to
>>> use client certificates.
>>> Anyone here from the TLS WG able to comment on whether there are plans
>>> to combat MITM in this respect?
>>>
>>
>>
>> I don't get the question. TLS is designed to combat MITM with or
>> without client certs. That's a fundamental requirement for TLS.
>>
> 
> yet MITM proxies exist and continue to be deployed presumably
> successfully.  My question was whether the WG was taking aim at this
> problem (which would be an escalation of the arms race).

"taking aim" is ambiguous so its hard to answer.

What has happened at least a couple of times was: vendor turns up,
says "our customers need and buy mitm so you need to standardise
it", tls wg says: no, we're here to provide transport layer
security, not to break it.

I don't see any escalation there nor any arms race. (It is probably
true that instances of mitm attack are growing in number.)

> 
>>>
>>> It's interesting to see the comment
>>> about recent TLS WG rejection of support for inspection.
>>>
>>
>>
>> Recent and repeated. I think this is maybe the 3rd time.
>>
> 
> I don't know enough about all the intricate issues to know whether there
> is a potential profile for TLS that enables trusted 3rd party inspection
> or not.  But that is WAY OT for this list.

Yes. TLS won't be changed that way I believe.


> 
>>>
>>> At the end of the day, the requirement is not going away, and it's only
>>> my opinion, but I think we'd get something that
>>> a) works a lot better (more reliably)
>>> b) better reflects reality and allows users to make informed choices
>>>
>>
>> Feel free to propose a specification that meets your proposed
>> requirements. That is hard-to-impossible IMO.
>>
> 
> GET https://etc

My equally detailed rebuttal is: "That doesn't work." :-)

Next step, you, or someone, writes a real I-D.
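(To spell out what "GET https://" would even mean as wire format: today an https request through a proxy starts with a CONNECT, which gives the proxy an opaque tunnel; the proposal amounts to sending the proxy an absolute-form request it can read. A sketch of the two message shapes; host, port and path are illustrative only, not from any draft:)

```python
# Sketch: today's https proxying uses an opaque CONNECT tunnel, so the
# proxy never sees the request inside it; the "GET https://..." idea
# would hand the proxy an absolute-form request it can inspect.
# Host names and paths below are made up for illustration.

def connect_tunnel(host, port=443):
    """Today's form: ask the proxy for an opaque byte tunnel."""
    return (f"CONNECT {host}:{port} HTTP/1.1\r\n"
            f"Host: {host}:{port}\r\n\r\n")

def absolute_form(host, path="/"):
    """Proposed form: the proxy sees scheme, host and path."""
    return (f"GET https://{host}{path} HTTP/1.1\r\n"
            f"Host: {host}\r\n\r\n")

print(connect_tunnel("www.example.com"))
print(absolute_form("www.example.com", "/index.html"))
```

(The hard part an I-D would have to specify is everything this sketch omits: who terminates TLS where, how the client authorises the proxy, and how the origin server learns a proxy is involved.)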

>>> if we actually accepted the reality of this requirement and designed for
>>> it.  IMO b actually results in more security.
>>> As for the issue of trust, this results in a requirement to trust the
>>> proxy.
>>>
>>
>>
>> You left out an important thing: it requires sites (e.g. a bank)
>> to trust proxies the site has never heard of with the site's
>> customer data, e.g. payment information.
>>
> 
> Sure, my point was client-centric.  However, banks are already in this
> situation.  I use online systems of 3 banks.  None require me to use a
> client certificate.  Is this just a meteor waiting to hit?

I've no idea if your argument is tongue-in-cheek or not. Probably
better not to respond on that one before that's clear. (Banks issuing
client-certs is not currently a really tractable approach.)

S

> 
> They already are demonstrating either ignorance or trust of MITM proxies
> operated by client organisations.  I won't do them the disrespect of
> claiming it's ignorance.
> 
>>
>>
>> Do we really want to engineer the web so as to allow a company
>> proxy to prevent payments to the company's favourite bad cause?
>> That's what's being enabled here. It's a bad plan.
>>
> 
> It can already happen.  If we want to stop it, that's yet another
> direction to move in.  What we're proposing has no impact on that.
> 
> Adrien
>>
>>
>> It might be tractable to figure how to get a user to trust her
>> employer's proxy for some things, but that's just nowhere near a
>> full solution IMO.
>>
>> Cheers,
>> S
>>
>>
>>>
>>> We don't have a system that does not require any trust in any
>>> party.  We trust the bank with our money, we trust the CA to properly
>>> issue certificates and to ensure safe keeping of their private keys.
>>> Most people IME are quite happy to have their web surfing scanned for
>>> viruses.  I don't see a problem with some real estate on a browser
>>> showing that they are required to trust the proxy they are using, or
>>> don't go to the site.
>>> Otherwise you have to inspect the certificate of every secure and
>>> sensitive site you go to in order to check if it's signed by who you
>>> expect (e.g. a CA instead of your proxy).  It's completely unrealistic
>>> to expect users to do that, and history has shown that educating
>>> end-users about the finer points of security is not easily done.
>>>
>>>
>>> Adrien
>>>
>>>
>>> ------ Original Message ------
>>> From: "Mark Nottingham" <mnot@mnot.net>
>>> To: "Willy Tarreau" <w@1wt.eu>
>>> Cc: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
>>> Sent: 7/08/2012 9:16:48 a.m.
>>> Subject: Re: Semantics of HTTPS
>>>
>>>>
>>>> On 06/08/2012, at 4:14 PM, Willy Tarreau <w@1wt.eu> wrote:
>>>>
>>>>
>>>>
>>>>>>
>>>>>>
>>>>>> Right. That's a big change from the semantics of HTTPS today,
>>>>>> though; right
>>>>>> now, when I see that, I know that I have end-to-end TLS.
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> No, you *believe* you do, you really don't know. That's clearly the
>>>>> problem
>>>>> with the way it works, man-in-the middle proxies are still able to
>>>>> intercept
>>>>> it and to forge certs they sign with their own CA and you have no way
>>>>> to know
>>>>> if your communications are snooped or not.
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> It's a really big logical leap from the existence of an attack to
>>>> changing the fundamental semantics of the URI scheme. And, that's what
>>>> a MITM proxy is -- it's not legitimate, it's not a recognised role,
>>>> it's an attack. We shouldn't legitimise it.
>>>>
>>>> Cheers,
>>>>
>>>> -- 
>>>> Mark Nottingham
>>>> http://www.mnot.net/
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>
> 
> 
> 
> 
Received on Monday, 6 August 2012 23:56:47 GMT
