breaking TLS (Was: Re: multiplexing -- don't do it)

On 04/02/2012 10:20 PM, Mike Belshe wrote:
> I was trying to describe https to the proxy, which breaks the SSL, and then
> initiates a new SSL connection to FB.
>
> I call this a "trusted proxy".  The browser in this case must have been
> explicitly configured to use the proxy, and told that it is okay to let it
> break SSL.

TLS MITM has been proposed before and rejected.

Even for FB, but definitely for banking, I don't want that middlebox
getting my re-usable credentials and I don't see how to avoid that
problem.
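
To make that concrete: once the middlebox terminates TLS it sees
every request in the clear, re-usable credentials included. A
made-up example of what it would log:

  GET /accounts/summary HTTP/1.1
  Host: bank.example
  Authorization: Basic dXNlcjpodW50ZXIy
  Cookie: session=...

Anyone holding that can replay the Authorization header and the
session cookie, which is exactly the exposure I mean.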

I do understand that there are perceived-real requirements here
for enterprise middleboxes to snoop, but we've not gotten IETF
consensus to support that kind of feature in our protocols.

Stephen.

PS: I'm not angling for better HTTP auth here. Even if we get that,
there will be many passwords and other re-usable credentials in use
for pretty much ever, and the argument against breaking TLS will
remain.


>
> Mike
>
>>
>> The proxy still can't see the facebook traffic in the clear, so the admin
>> will still need to either block facebook entirely or do a MITM.
>>
>> Peter
>>
>> On Mon, Apr 2, 2012 at 5:11 PM, Mike Belshe <mike@belshe.com> wrote:
>>
>>>
>>> On Mon, Apr 2, 2012 at 2:08 PM, Adrien W. de Croy <adrien@qbik.com> wrote:
>>>
>>>>
>>>> ------ Original Message ------
>>>> From: "Mike Belshe"<mike@belshe.com>
>>>> To: "Adrien W. de Croy"<adrien@qbik.com>
>>>> Cc: "Amos Jeffries"<squid3@treenet.co.nz>;"ietf-http-wg@w3.org"<
>>>> ietf-http-wg@w3.org>
>>>> Sent: 3/04/2012 8:52:22 a.m.
>>>> Subject: Re: multiplexing -- don't do it
>>>>
>>>> On Mon, Apr 2, 2012 at 1:43 PM, Adrien W. de Croy <adrien@qbik.com> wrote:
>>>>
>>>>>
>>>>> ------ Original Message ------
>>>>> From: "Mike Belshe" <mike@belshe.com>
>>>>>
>>>>> On Mon, Apr 2, 2012 at 6:57 AM, Amos Jeffries <squid3@treenet.co.nz> wrote:
>>>>>
>>>>>> On 1/04/2012 5:17 a.m., Adam Barth wrote:
>>>>>>
>>>>>> On Sat, Mar 31, 2012 at 4:54 AM, Mark Nottingham wrote:
>>>>>>>
>>>>>>>> On 31/03/2012, at 1:11 PM, Mike Belshe wrote:
>>>>>>>>
>>>>>>>> For the record - nobody wants to avoid using port 80 for new
>>>>>>>>> protocols.  I'd love to!  There is no religious reason that we don't - it's
>>>>>>>>> just that we know, for a fact, that we can't do it without subjecting a
>>>>>>>>> non-trivial number of users to hangs, data corruption, and other errors.
>>>>>>>>> You might think it's ok for someone else's browser to throw reliability out
>>>>>>>>> the window, but nobody at Microsoft, Google, or Mozilla has been willing to
>>>>>>>>> do that…
>>>>>>>>>
>>>>>>>> Mike -
>>>>>>>>
>>>>>>>> I don't disagree on any specific point (as I think you know), but I
>>>>>>>> would observe that the errors you're talking about can themselves be viewed
>>>>>>>> as transient. I.e., just because they occur in experiments now, doesn't
>>>>>>>> necessarily mean that they won't be fixed in the infrastructure in the
>>>>>>>> future -- especially if they generate a lot of support calls, because they
>>>>>>>> break a lot MORE things than they do now.
>>>>>>>>
>>>>>>>> Yes, there will be a period of pain, but I just wanted to highlight
>>>>>>>> one of the potential differences between deploying a standard and a
>>>>>>>> single-vendor effort.  It's true that we can't go too far here; if we
>>>>>>>> specify a protocol that breaks horribly 50% of the time, it won't get
>>>>>>>> traction. However, if we have a good base population and perhaps a good
>>>>>>>> fallback story, we *can* change things.
>>>>>>>>
>>>>>>> That's not our experience as browser vendors.  If browsers offer an
>>>>>>> HTTP/2.0 that has a bad user experience for 10% of users, then major
>>>>>>> sites (e.g., Twitter) won't adopt it.  They don't want to punish their
>>>>>>> users any more than we do.
>>>>>>>
>>>>>>> Worse, if they do adopt the new protocol, users who have trouble will
>>>>>>> try another browser (e.g., one that doesn't support HTTP/2.0 such as
>>>>>>> IE 9), observe that it works, and blame the first browser for being
>>>>>>> buggy.  The net result is that we lose a user and no pressure is
>>>>>>> exerted on the intermediaries who are causing the problem in the first
>>>>>>> place.
>>>>>>>
>>>>>>> These are powerful market forces that can't really be ignored.
>>>>>>>
>>>>>>
>>>>>> So the takeaway there is: pay attention to the intermediary people when
>>>>>> they say something can't be implemented (or won't scale reasonably).
>>>>>>
>>>>>
>>>>> I agree we should pay attention to scalability - and we have.
>>>>>
>>>>> Please don't disregard that Google servers switched to SPDY with zero
>>>>> additional hardware (the Google servers are fully conformant HTTP/1.1
>>>>> proxies with a lot more DoS logic than the average site).  I know, some
>>>>> people think Google is some magical place where scalability defies physics
>>>>> and is not relevant, but this isn't true.  Google is just like every other
>>>>> site, except much much bigger.   If we had a 10% increase in server load
>>>>> with SPDY, Google never could have shipped it.  Seriously, who would roll
>>>>> out thousands of new machines for an experimental protocol?  Nobody.  How
>>>>> would we have convinced the executive team "this will be faster", if they
>>>>> were faced with some huge cap-ex bill?  Doesn't sound very convincing, does
>>>>> it?  In my mind, we have already proven clearly that SPDY scales just fine.
>>>>>
>>>>> But I'm open to other data.  So if you have a SPDY implementation and
>>>>> want to comment on the effects on your server, let's hear it!  And I'm not
>>>>> saying SPDY is free.  But, when you weigh costs (like compression and
>>>>> framing) against benefits (like 6x fewer connections),  there is no
>>>>> problem.  And could we make improvements still?  Of course.  But don't
>>>>> pretend that these are the critical parts of SPDY.  These are the mice nuts.
>>>>>
>>>>>
>>>>> For a forward proxy, there are several main reasons to even exist:
>>>>>
>>>>> a) implement and enforce access control policy
>>>>> b) audit usage
>>>>> c) cache
>>>>>
>>>>> If you block any of these by bypassing everything with TLS, you have a
>>>>> non-starter for corporate environments.  Even if admins currently kinda
>>>>> turn a blind eye (because they have to) and allow port 443 through, as more
>>>>> and more traffic moves over to 443, more pressure will come down from
>>>>> management to control it.
>>>>>
>>>>> Best we don't get left with the only option being MITM.
>>>>>
>>>>
>>>> In my talk at the IETF, I proposed a solution to this.
>>>>
>>>> Browsers need to implement SSL to trusted proxies, which can do all of
>>>> the a/b/c that you suggested above.  This solution is better because the
>>>> proxy becomes explicit rather than implicit.  This means that the user
>>>> knows of it, and the IT guy knows of it.  If there are problems, it can be
>>>> configured out of the system.  Implicit proxies are only known to the IT
>>>> guy (maybe), and can't be configured out from a client.  The browser can be
>>>> made to honor HSTS so that end-to-end encryption is always enforced
>>>> appropriately.
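>>>>
>>>> As a sketch of how the explicit configuration could look (hypothetical
>>>> host names, and assuming a browser that understands the "HTTPS"
>>>> secure-proxy return type in a PAC file, as Chrome does today):
>>>>
>>>>   // Send everything through an explicit, TLS-protected proxy.
>>>>   // "HTTPS" means the browser speaks TLS *to the proxy*, so the
>>>>   // proxy is a visible, configured party, not a silent intercept.
>>>>   function FindProxyForURL(url, host) {
>>>>     if (dnsDomainIs(host, ".corp.example"))
>>>>       return "DIRECT";  // keep intranet traffic off the proxy
>>>>     return "HTTPS proxy.corp.example:443";
>>>>   }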
>>>>
>>>> Further, proxies today already need this solution, even without SPDY.
>>>> Traffic is moving to SSL already, albeit slowly, and corporate firewalls
>>>> can't see it today.  Corporate firewall admins are forced to do things like
>>>> block facebook entirely to prevent data leakage.  But, with this solution,
>>>> they could allow facebook access and still protect their IP.  (Or they
>>>> could block it if they wanted to, of course).
>>>>
>>>> Anyway, I do agree with you that we need better solutions so that we
>>>> don't incur more SSL MITM.  Many corporations are already looking for
>>>> expensive SSL MITM solutions (very complex to rollout due to key
>>>> management) because of the reasons I mention above, and it's a technically
>>>> inferior solution.
>>>>
>>>> So let's do it!
>>>>
>>>>
>>>> I basically agree with all the above, however there is the ISP
>>>> intercepting proxy to think about.
>>>>
>>>> Many ISPs here in NZ have them; it's just a fact of life when you're
>>>> 150ms from the US with restricted bandwidth.  Pretty much all the big ISPs
>>>> have intercepting caching proxies.
>>>>
>>>> There's just no way to make these work... period...
>>>>
>>>> unless the ISP is willing to
>>>>
>>>> a) try to support all their customers in using an explicit proxy, or
>>>> b) get all their customers to install a root cert so they can do MITM.
>>>>
>>>
>>>> Maybe we need a better way to force a client to use a proxy, and take
>>>> the pain out of it for administration.  And do it securely (just
>>>> remembering why 305 was deprecated).
>>>>
>>>
>>> Do proxy PACs or DHCP work for this?
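>>>
>>> WPAD ties those together: the network advertises the PAC URL via
>>> DHCP option 252 (or the well-known "wpad" DNS name) and the browser
>>> fetches it. A sketch with dnsmasq, hypothetical URL:
>>>
>>>   dhcp-option=252,"http://wpad.corp.example/wpad.dat"
>>>
>>> Though neither discovery path is authenticated, which is the same
>>> class of problem that got 305 deprecated.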
>>>
>>> Note that we also need the browsers to honor HSTS end-to-end, even if we
>>> turn on "GET https://".
>>> Mike
>>>
>>>> Adrien
>>>>
>>>> Mike
>>>>
>>>>>
>>>>> Adrien
>>>>>
>>>>> Mike
>>>>>
>>>>>> With plenty of bias, I agree.
>>>>>>
>>>>>> AYJ
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Received on Monday, 2 April 2012 21:37:01 UTC