- From: Mike West <mkwst@google.com>
- Date: Mon, 2 Jun 2014 15:04:48 +0200
- To: Sigbjørn Vik <sigbjorn@opera.com>
- Cc: Daniel Veditz <dveditz@mozilla.com>, Joel Weinberger <jww@chromium.org>, "Oda, Terri" <terri.oda@intel.com>, Michal Zalewski <lcamtuf@coredump.cx>, Egor Homakov <homakov@gmail.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>, "Eduardo' Vela" <evn@google.com>
- Message-ID: <CAKXHy=cqLZ6iFMPac7771Q_6W3-BZR_QCknfqa6CrAdTfVoeZw@mail.gmail.com>
On Mon, Jun 2, 2014 at 10:03 AM, Sigbjørn Vik <sigbjorn@opera.com> wrote:

> On 30-May-14 16:51, Mike West wrote:
>
> > My narrow point here is that any solution to the login detection
> > mechanism presented in this example would also prevent CSP from
> > detecting the login via the same entry point.
>
> Which is what I am trying to show you is incorrect :) The author was
> limited to using image based redirects, because he couldn't use other
> redirects reliably. CSP can.

We're violently agreeing. "same entry point" was meant to express that
limitation. :)

> > 1. Replacing a login-requiring redirect with a 403 error page that
> > contains a login form with a cross-origin action seems like a perfectly
> > reasonable way of dealing with the issue of redirect detection.
>
> That would break every inline that is served via a redirect.

I guess I'm wondering how pervasive that is. I haven't looked for this on
any website at all, but my suspicion is that the majority of subresource
loads are loaded directly. Still, agreed that this would break some set
of use cases.

> There are many ways of doing that. For this case:
> Webmasters want to know if an inline they expected to work was blocked
> for some reason. We do not want to leak third party information.
> One simple solution is some kind of whitelisting on the cross domain
> servers, that they may be reported to the master domain.
> Another solution is to create tools for webmasters to easily see which
> same domain inlines have been loaded, and which inlines have ended up
> requesting cross domain resources.
> I have not spent much time on coming up with an alternate reporting
> scheme, but there are many possibilities.
> We should discuss this in more detail on a separate thread.

The insight that site authors probably _really_ only care about
aggregated reports is a valuable one. I think the WG discussed sampling
at some point in the past (e.g. not promising to deliver every violation
report, but pushing X% of them) but I can't find the thread. Perhaps
it's worth revisiting that topic.

> > A lot of that claim depends on the details, I suppose. And all of it
> > depends on there being no side-channel leakage, which is unlikely to be
> > the case.
>
> What new forms of side channel leakage do you foresee?

Nothing new, just the same old wonderfulness. Timing attacks based on
dealing with a blank resource vs. a real resource, certainly (filters
probably have significantly different performance characteristics on a
blank image, for instance).

> > CSP mitigates these risks, it does not prevent them. I hope I didn't
> > claim that it did, only that blocking requests is more effective than
> > not.
>
> Very marginally, about as much as a fence with a big gap in it is a
> security improvement over no fence. ;)

Hrm. This seems overly dismissive. For example, pushing bad actors
towards visible exfiltration is a good thing. If we can force
exfiltration to be visible (by requiring full page navigations, for
instance), then I think we're better off than we are now.

> I do not see data exfiltration protection as an argument for CSP.

I'd prefer not to give up on the idea entirely.

> Agreed, the time she has taken to implement CSP might be well worth it.
> My point is about the time she has taken to implement path restrictions,
> it might have been a complete waste.

If you agree that CSP is potentially valuable on its face, then I'm
unconvinced that the incremental effort of implementing path-based
restrictions will be arduous for the majority of developers. Wildly
guessing, I'd imagine that appending some variant of `/js` or `/script`
to every `script-src` directive's source expression would solve 80% of
the web's use cases.
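To make that concrete, a path-restricted policy along those lines might
look like the following (a sketch only; the hostname and the `/js/`
prefix are illustrative placeholders, not values from this thread):

```
# Host-wide whitelisting: any resource on the host may load as a script.
Content-Security-Policy: script-src https://example.com

# Path-restricted: only resources under /js/ may load as scripts, since
# a source expression ending in '/' is proposed to match that path prefix.
Content-Security-Policy: script-src https://example.com/js/
```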
> > * If the developer doesn't expect to load redirected resources, perhaps
> > we could add an 'unsafe-redirect' (or similar) source expression which
> > she'd have to add to her policy in order to allow any redirection in the
> > first place. That would further mitigate the risk posed by accidental
> > introduction of otherwise whitelisted redirects.
>
> CSP protects the origin, not the target. This will not help against
> redirection detection on the target. (Why would an attacker limit
> himself by using unsafe-redirect?)

Sorry, the thought wasn't at all clear from that sentence.
'unsafe-redirect' wouldn't solve the attack you're concerned with. It
would, however, potentially allow us to change the proposal I've made
such that CSP blocks all redirects by default, allowing any redirect
only if the keyword is present. That suggestion is meant to address your
concerns regarding the holes opened up by allowing redirects in certain
cases, as developers who know they don't want to load scripts resulting
from a redirect would have an option of avoiding the path-based
confusion that you're pointing to.
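In other words (a hypothetical sketch: 'unsafe-redirect' is only a
proposed keyword at this point, and the hostname is again a placeholder):

```
# Default under the revised proposal: a whitelisted script that responds
# with a 30x redirect is refused.
Content-Security-Policy: script-src https://example.com/js/

# Explicit opt-in: redirects from whitelisted sources are followed again,
# at the cost of re-opening the path-confusion hole discussed above.
Content-Security-Policy: script-src https://example.com/js/ 'unsafe-redirect'
```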
> My point was simply to show that phishing is considered a much worse
> threat than XSS on the net. Spear phishing might be the biggest part of
> this, but phishing is still how most people get exploited, not XSS. I am
> open to changing my mind if you can find statistics showing me otherwise.

I'll look for solid statistics. If I don't find any, I'll argue that
your sources are strange and not representative, and that my anecdotes
are obviously more believable than yours. If I do find some, I'll argue
that my data is sincerely more believable than yours. :)

> A malicious ad on a website will not be blocked by SafeBrowsing until it
> has been up for some time, opening up for users to be exploited in the
> meanwhile.

Malicious ads prove too much. They can already just send you to
`malware-r-us.com/nyt` from `nyt.com`. CSP-based sniffing is simply not
going to be the way they phish users.

> And I do not believe us opening up a new security hole on the
> internet is ok just because services like SafeBrowsing most likely will
> be able to deal with it after some time.

SafeBrowsing isn't a panacea, granted. It does reduce the risk, but
certainly doesn't negate it.

-mike

Received on Monday, 2 June 2014 13:05:40 UTC