W3C home > Mailing lists > Public > ietf-http-wg@w3.org > April to June 2012

Re[2]: breaking TLS (Was: Re: multiplexing -- don't do it)

From: Adrien W. de Croy <adrien@qbik.com>
Date: Tue, 03 Apr 2012 00:27:05 +0000
To: "Stephen Farrell" <stephen.farrell@cs.tcd.ie>, William Chan (陈智昌) <willchan@chromium.org>
Cc: "Mike Belshe" <mike@belshe.com>, "Peter Lepeska" <bizzbyster@gmail.com>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
Message-Id: <em52256b33-6145-4798-85d9-13520e523462@BOMBED>

------ Original Message ------
From: "Stephen Farrell" <stephen.farrell@cs.tcd.ie>
>
>
>On 04/03/2012 12:47 AM, William Chan (陈智昌) wrote: 
>
>>>You really mean "prevent" there? POSTing a rot13 version of the 
>>>corporate secret won't work? And I thought more anti-porn policies 
>>>were domain name and not content based. 
>>>
>>
>>I don't mean _completely_ prevent. But help stop the 9X% case? Yeah, 
>>I think that's what they're shooting for. I'm not well versed in the 
>>intricacies of IT policies using these SSL MITM proxies 
>
>Me neither. That's why I asked. But I'd like to know not 
>just about the policy they want to (or pay to) enforce, 
>but rather also about the effectiveness of their attempts 
>at enforcement. 
  
That's more of a political issue.  Same with police and laws.  
Something doesn't need to be 100% effective in order to be pursued.
  
But the malware case is definitely an issue: if the entire web goes to 
SSL, then 100% of web-borne malware will be over SSL.
  
Browser hijacking etc. will not go away with this either.
  
I really can't see the entire web going SSL/TLS though.  Maybe we need 
to bang heads a bit more on this issue :)
  
>
>
>S 
>
>> (I suspect someone 
>>else on the mailing list is), but I know there are schools and what 
>>not which want to filter based on content, since schools want this 
>>from Google (see 
>>http://support.google.com/websearch/bin/answer.py?hl=en&answer=173733). 
>>
>>
>>
>>>If there's published evidence of the effectiveness of that kind of 
>>>thing I've not seen it, but I didn't go looking. Saying it's obvious 
>>>doesn't help me at least. 
>>>
>>>I can more easily envisage spotting malware on the inbound side 
>>>as maybe effective but don't know how much of that is coming 
>>>via TLS. 
>>>
>>>Really, I'm asking for evidence here, not just trying to score 
>>>points. 
>>>
>>>S. 
>>>
>>>
>>>>>There is plenty of evidence that people sell this kind of thing 
>>>>>and that people use this kind of thing, but if it's trivially 
>>>>>defeated we're into security theatre. 
>>>>>
>>>>
>>>>
>>>>I'm not sure what you mean by "trivially defeated", but if you mean 
>>>>whether or not SSL is subverted by these MITM boxes, then it's 
>>>>obvious that this is already the case. See 
>>>>https://www.corelan.be/index.php/2012/03/14/blackhat-eu-2012-day-1/ 
>>>>for example. The SSL MITM proxy did not do any certificate validation. 
>>>>
>>>>
>>>>>If that were the case, then maybe e2e security isn't worth giving 
>>>>>up for so little. 
>>>>>
>>>>>
>>>>>>I don't disagree with your concerns, of course. This work needs 
>>>>>>a lot of scrutiny. But I do think we can improve overall SSL 
>>>>>>adoption if we recognize what people are going to do without 
>>>>>>these types of features. 
>>>>>>
>>>>>>
>>>>>The problem is, scrutiny will not enable us to square a circle, 
>>>>>which may be what's involved here. 
>>>>>
>>>>>S 
>>>>>
>>>>>
>>>>>
>>>>>  Mike 
>>>>>>
>>>>>>>Stephen. 
>>>>>>>
>>>>>>>>>I do understand that there are perceived-real requirements 
>>>>>>>>>here for enterprise middleboxes to snoop but we've not gotten 
>>>>>>>>>IETF consensus to support that kind of feature in our 
>>>>>>>>>protocols. 
>>>>>>>>>
>>>>>>>>Sure we do. 2616 already contemplates and supports proxies - 
>>>>>>>>just not with https. 
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>>Stephen. 
>>>>>>>>>PS: I'm not angling for better http auth here. Even if we get 
>>>>>>>>>that there will be many passwords and other re-usable 
>>>>>>>>>credentials in use for pretty much ever and the argument 
>>>>>>>>>against breaking TLS will remain. 
>>>>>>>>>
>>>>>>>>>
>>>>>>>>Auth in fact may be the answer to the issue of trust for a 
>>>>>>>>server to place in a proxy (re the client cert issue). 
>>>>>>>>
>>>>>>>>There may not be a good answer for client certs; it may be that 
>>>>>>>>the only way to support them is to continue to tunnel. 
>>>>>>>>
>>>>>>>>At least they are not that prevalent, so in cases where they 
>>>>>>>>are required, tunneling can be allowed. 
>>>>>>>>
>>>>>>>>Adrien 
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>  Mike 
>>>>>>>>>
>>>>>>>>>>>The proxy can still not see the facebook traffic in the 
>>>>>>>>>>>clear, so the admin will still either need to block facebook 
>>>>>>>>>>>entirely or do a MITM. 
>>>>>>>>>>>Peter 
>>>>>>>>>>>On Mon, Apr 2, 2012 at 5:11 PM, Mike Belshe <mike@belshe.com> 
>>>>>>>>>>>wrote: 
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>On Mon, Apr 2, 2012 at 2:08 PM, Adrien W. de Croy 
>>>>>>>>>>>><adrien@qbik.com> wrote: 
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>>------ Original Message ------ 
>>>>>>>>>>>>>From: "Mike Belshe" <mike@belshe.com> 
>>>>>>>>>>>>>To: "Adrien W. de Croy" <adrien@qbik.com> 
>>>>>>>>>>>>>Cc: "Amos Jeffries" <squid3@treenet.co.nz>; 
>>>>>>>>>>>>>"ietf-http-wg@w3.org" <ietf-http-wg@w3.org> 
>>>>>>>>>>>>>Sent: 3/04/2012 8:52:22 a.m. 
>>>>>>>>>>>>>Subject: Re: multiplexing -- don't do it 
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>On Mon, Apr 2, 2012 at 1:43 PM, Adrien W. de Croy 
>>>>>>>>>>>>><adrien@qbik.com> wrote: 
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>>------ Original Message ------ 
>>>>>>>>>>>>>>From: "Mike Belshe" <mike@belshe.com> 
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>On Mon, Apr 2, 2012 at 6:57 AM, Amos Jeffries 
>>>>>>>>>>>>>><squid3@treenet.co.nz> wrote: 
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>  On 1/04/2012 5:17 a.m., Adam Barth wrote: 
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>On Sat, Mar 31, 2012 at 4:54 AM, Mark Nottingham wrote: 
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>   On 31/03/2012, at 1:11 PM, Mike Belshe wrote: 
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>For the record - nobody wants to avoid using port 80 
>>>>>>>>>>>>>>>>>>for new protocols. I'd love to! There is no religious 
>>>>>>>>>>>>>>>>>>reason that we don't - it's just that we know, for a 
>>>>>>>>>>>>>>>>>>fact, that we can't do it without subjecting a 
>>>>>>>>>>>>>>>>>>non-trivial number of users to hangs, data corruption, 
>>>>>>>>>>>>>>>>>>and other errors. You might think it's ok for someone 
>>>>>>>>>>>>>>>>>>else's browser to throw reliability out the window, 
>>>>>>>>>>>>>>>>>>but nobody at Microsoft, Google, or Mozilla has been 
>>>>>>>>>>>>>>>>>>willing to do that… 
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>Mike - 
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>I don't disagree on any specific point (as I think you 
>>>>>>>>>>>>>>>>>know), but I would observe that the errors you're 
>>>>>>>>>>>>>>>>>talking about can themselves be viewed as transient. 
>>>>>>>>>>>>>>>>>I.e., just because they occur in experiments now, 
>>>>>>>>>>>>>>>>>doesn't necessarily mean that they won't be fixed in 
>>>>>>>>>>>>>>>>>the infrastructure in the future -- especially if they 
>>>>>>>>>>>>>>>>>generate a lot of support calls, because they break a 
>>>>>>>>>>>>>>>>>lot MORE things than they do now. 
>>>>>>>>>>>>>>>>>Yes, there will be a period of pain, but I just wanted 
>>>>>>>>>>>>>>>>>to highlight one of the potential differences between 
>>>>>>>>>>>>>>>>>deploying a standard and a single-vendor effort. It's 
>>>>>>>>>>>>>>>>>true that we can't go too far here; if we specify a 
>>>>>>>>>>>>>>>>>protocol that breaks horribly 50% of the time, it won't 
>>>>>>>>>>>>>>>>>get traction. However, if we have a good base 
>>>>>>>>>>>>>>>>>population and perhaps a good fallback story, we *can* 
>>>>>>>>>>>>>>>>>change things. 
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>That's not our experience as browser vendors. If 
>>>>>>>>>>>>>>>>browsers offer an HTTP/2.0 that has a bad user 
>>>>>>>>>>>>>>>>experience for 10% of users, then major sites (e.g., 
>>>>>>>>>>>>>>>>Twitter) won't adopt it. They don't want to punish 
>>>>>>>>>>>>>>>>their users any more than we do. 
>>>>>>>>>>>>>>>>Worse, if they do adopt the new protocol, users who 
>>>>>>>>>>>>>>>>have trouble will try another browser (e.g., one that 
>>>>>>>>>>>>>>>>doesn't support HTTP/2.0, such as IE 9), observe that 
>>>>>>>>>>>>>>>>it works, and blame the first browser for being buggy. 
>>>>>>>>>>>>>>>>The net result is that we lose a user and no pressure 
>>>>>>>>>>>>>>>>is exerted on the intermediaries who are causing the 
>>>>>>>>>>>>>>>>problem in the first place. 
>>>>>>>>>>>>>>>>These are powerful market forces that can't really be 
>>>>>>>>>>>>>>>>ignored. 
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>So the takeaway there is: pay attention to the 
>>>>>>>>>>>>>>>intermediary people when they say something can't be 
>>>>>>>>>>>>>>>implemented (or won't scale reasonably). 
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>I agree we should pay attention to scalability - and we 
>>>>>>>>>>>>>>have. Please don't disregard that Google servers switched 
>>>>>>>>>>>>>>to SPDY with zero additional hardware (the Google servers 
>>>>>>>>>>>>>>are fully conformant http/1.1 proxies with a lot more DoS 
>>>>>>>>>>>>>>logic than the average site). I know, some people think 
>>>>>>>>>>>>>>Google is some magical place where scalability defies 
>>>>>>>>>>>>>>physics and is not relevant, but this isn't true. Google 
>>>>>>>>>>>>>>is just like every other site, except much much bigger. 
>>>>>>>>>>>>>>If we had a 10% increase in server load with SPDY, Google 
>>>>>>>>>>>>>>never could have shipped it. Seriously, who would roll 
>>>>>>>>>>>>>>out thousands of new machines for an experimental 
>>>>>>>>>>>>>>protocol? Nobody. How would we have convinced the 
>>>>>>>>>>>>>>executive team "this will be faster", if they were faced 
>>>>>>>>>>>>>>with some huge cap-ex bill? Doesn't sound very 
>>>>>>>>>>>>>>convincing, does it? In my mind, we have already proven 
>>>>>>>>>>>>>>clearly that SPDY scales just fine. 
>>>>>>>>>>>>>>But I'm open to other data. So if you have a SPDY 
>>>>>>>>>>>>>>implementation and want to comment on the effects on your 
>>>>>>>>>>>>>>server, let's hear it! 
>>>>>>>>>>>>>>And I'm not saying SPDY is free. But, when you weigh 
>>>>>>>>>>>>>>costs (like compression and framing) against benefits 
>>>>>>>>>>>>>>(like 6x fewer connections), there is no problem. And 
>>>>>>>>>>>>>>could we make improvements still? Of course. But don't 
>>>>>>>>>>>>>>pretend that these are the critical parts of SPDY. These 
>>>>>>>>>>>>>>are the mice nuts. 
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>For a forward proxy, there are several main reasons to 
>>>>>>>>>>>>>>even exist: 
>>>>>>>>>>>>>>a) implement and enforce access control policy 
>>>>>>>>>>>>>>b) audit usage 
>>>>>>>>>>>>>>c) cache 
>>>>>>>>>>>>>>If you block any of these by bypassing everything with 
>>>>>>>>>>>>>>TLS, you have a non-starter for corporate environments. 
>>>>>>>>>>>>>>Even if currently admins kinda turn a blind eye (because 
>>>>>>>>>>>>>>they have to) and allow port 443 through, as more and 
>>>>>>>>>>>>>>more traffic moves over to 443, more pressure will come 
>>>>>>>>>>>>>>down from management to control it. 
>>>>>>>>>>>>>>Best we don't get left with the only option being MITM. 
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
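To spell out why the proxy is blind for 443 traffic: for http:// URLs the browser sends the proxy an absolute-form request it can filter, audit and cache, but for https:// it sends only a CONNECT and then tunnels opaque TLS bytes. A rough sketch of the two request shapes (the URLs are hypothetical):

```javascript
// Sketch: the request line a browser sends to a forward proxy today.
// Plain http gives the proxy an absolute-form request it can act on
// (the a/b/c above); https gives it only a CONNECT, after which the
// proxy must blindly relay the TLS stream.
function proxyRequestLine(url) {
  const u = new URL(url);
  if (u.protocol === 'https:') {
    const port = u.port || '443'; // default https port
    return `CONNECT ${u.hostname}:${port} HTTP/1.1`;
  }
  return `GET ${u.href} HTTP/1.1`;
}
```

The "trusted proxy" idea below amounts to sending the absolute-form GET for https:// URLs too, over a TLS connection to the proxy itself, instead of the opaque CONNECT tunnel.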
>>>>>>>>>>>>>In my talk at the IETF, I proposed a solution to this. 
>>>>>>>>>>>>>Browsers need to implement SSL to trusted proxies, which 
>>>>>>>>>>>>>can do all of the a/b/c that you suggested above. This 
>>>>>>>>>>>>>solution is better because the proxy becomes explicit 
>>>>>>>>>>>>>rather than implicit. This means that the user knows of 
>>>>>>>>>>>>>it, and the IT guys know of it. If there are problems, it 
>>>>>>>>>>>>>can be configured out of the system. Implicit proxies are 
>>>>>>>>>>>>>only known to the IT guy (maybe), and can't be configured 
>>>>>>>>>>>>>out from a client. The browser can be made to honor HSTS 
>>>>>>>>>>>>>so that end-to-end encryption is always enforced 
>>>>>>>>>>>>>appropriately. 
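For readers who haven't followed the HSTS work: it's the Strict-Transport-Security response header. Once a browser has seen it over a valid TLS connection, it rewrites future http:// navigations to https:// for that origin and hard-fails on certificate errors, which is what lets it refuse an implicit MITM. A small sketch of building the header an origin would send (the values are illustrative):

```javascript
// Sketch (illustrative values): building the Strict-Transport-Security
// header an origin sends to opt in to HSTS. Browsers only honor it on a
// response received over valid TLS, so in practice it goes on the https
// server, never the plain-http one.
function hstsHeader({ maxAgeSeconds, includeSubDomains }) {
  let v = `max-age=${maxAgeSeconds}`;
  if (includeSubDomains) v += '; includeSubDomains';
  return v;
}

// e.g. a one-year policy that also covers subdomains:
const value = hstsHeader({ maxAgeSeconds: 31536000, includeSubDomains: true });
// -> 'max-age=31536000; includeSubDomains'
```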
>>>>>>>>>>>>>Further, proxies today already need this solution, even 
>>>>>>>>>>>>>without SPDY. Traffic is moving to SSL already, albeit 
>>>>>>>>>>>>>slowly, and corporate firewalls can't see it today. 
>>>>>>>>>>>>>Corporate firewall admins are forced to do things like 
>>>>>>>>>>>>>block facebook entirely to prevent data leakage. But, with 
>>>>>>>>>>>>>this solution, they could allow facebook access and still 
>>>>>>>>>>>>>protect their IP. (Or they could block it if they wanted 
>>>>>>>>>>>>>to, of course.) 
>>>>>>>>>>>>>Anyway, I do agree with you that we need better solutions 
>>>>>>>>>>>>>so that we don't incur more SSL MITM. Many corporations 
>>>>>>>>>>>>>are already looking for expensive SSL MITM solutions (very 
>>>>>>>>>>>>>complex to roll out due to key management) because of the 
>>>>>>>>>>>>>reasons I mention above, and it's a technically inferior 
>>>>>>>>>>>>>solution. 
>>>>>>>>>>>>>So let's do it! 
>>>>>>>>>>>>>
>>>>>>>>>>>>>I basically agree with all the above; however, there is 
>>>>>>>>>>>>>the ISP intercepting proxy to think about. 
>>>>>>>>>>>>>Many ISPs here in NZ have them; it's just a fact of life 
>>>>>>>>>>>>>when you're 150ms from the US with restricted bandwidth. 
>>>>>>>>>>>>>Pretty much all the big ISPs have intercepting caching 
>>>>>>>>>>>>>proxies. 
>>>>>>>>>>>>>There's just no way to make these work... period... unless 
>>>>>>>>>>>>>the ISP is to 
>>>>>>>>>>>>>a) try and support all their customers to use an explicit 
>>>>>>>>>>>>>proxy, or 
>>>>>>>>>>>>>b) get all their customers to install a root cert so they 
>>>>>>>>>>>>>can do MITM. 
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>Maybe we need a better way to force a client to use a 
>>>>>>>>>>>>>proxy, and take the pain out of it for administration. And 
>>>>>>>>>>>>>do it securely (just remembering why 305 was deprecated). 
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>Do proxy pacs or dhcp work for this? 
>>>>>>>>>>>>Note that we also need the browsers to honor HSTS 
>>>>>>>>>>>>end-to-end, even if we turn on "GET https://". Mike 
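They can get you part of the way. A PAC file (optionally discovered via DHCP option 252 or the WPAD DNS convention) is how clients get pointed at an explicit proxy today; the catch is that nothing forces a client to obey it, which is the "force" problem above. A minimal illustrative PAC file - the proxy host and internal domain are made up:

```javascript
// Minimal PAC file sketch (hypothetical proxy host and internal domain).
// Browsers call FindProxyForURL for every request; DHCP option 252 or the
// WPAD DNS name tells clients where to fetch this file from.
function FindProxyForURL(url, host) {
  // Keep local and intranet traffic direct; everything else goes via
  // the explicit proxy.
  if (host === 'localhost' || host.endsWith('.internal.example')) {
    return 'DIRECT';
  }
  // Try the proxy first; fall back to DIRECT if it is unreachable.
  return 'PROXY proxy.example.net:3128; DIRECT';
}
```

Real PAC engines use an old JS dialect with helpers like shExpMatch(); plain string tests are used here only so the sketch also runs outside a browser.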
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>  Adrien 
>>>>>>>>>>>>
>>>>>>>>>>>>>Mike 
>>>>>>>>>>>>>
>>>>>>>>>>>>>  Adrien 
>>>>>>>>>>>>>
>>>>>>>>>>>>>>Mike 
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>  With plenty of bias, I agree. 
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>  AYJ 
Received on Tuesday, 3 April 2012 00:27:37 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Friday, 27 April 2012 06:51:59 GMT