
Re: Remove paths from CSP?

From: Mike West <mkwst@google.com>
Date: Fri, 30 May 2014 16:51:19 +0200
Message-ID: <CAKXHy=d7XcF2UnuUxnpfSFiyEEJy2f8eMiH6JBocV19Y1VOeRw@mail.gmail.com>
To: Sigbjørn Vik <sigbjorn@opera.com>
Cc: Daniel Veditz <dveditz@mozilla.com>, Joel Weinberger <jww@chromium.org>, "Oda, Terri" <terri.oda@intel.com>, Michal Zalewski <lcamtuf@coredump.cx>, Egor Homakov <homakov@gmail.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>, "Eduardo' Vela" <evn@google.com>
Trimming, trimming, trimming. If I miss an important bit in my effort to
distill this down to the interesting stuff, please do point it out to me.

On Wed, May 28, 2014 at 5:30 PM, Sigbjørn Vik <sigbjorn@opera.com> wrote:

>
> -------- Original Message --------
> From: Sigbjørn Vik <sigbjorn@opera.com>
> Date: Tue, 20 May 2014 16:18:16 +0200
>
> E.g. forum.org automatically
> redirecting me to my most used forum, whether that be gay.forum.org,
> breast-cancer.forum.org or al-quaeda.forum.org. (Apologies for getting
> you all flagged in NSA's database.)
>

Got it. My suspicion would be that this kind of redirection is uncommon,
and happens significantly less often than login-related redirection, but I
have no data to back that up.

Regardless, the impact is more or less the same as login-detection: your
usage of a site is exposed.

> > More saliently, if Tumblr indeed doesn't fall prey to this attack, then
> > it _also_ doesn't fall prey to the CSP-based variant. Both are
> > mechanisms of detecting a redirect; if the redirect doesn't happen, CSP
> > won't catch it either.
>
> You seem to be confusing image based redirects (what the author was
> looking for), with redirects in general (what is required for CSP).
>

My narrow point here is that any solution to the login detection mechanism
presented in this example would also prevent CSP from detecting the login
via the same entry point.
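To make the narrow point concrete, here's a toy model of the redirect-detection trick in question (a sketch, not a browser; the hostnames, paths, and matching logic are all invented for illustration):

```python
# Toy model of the CSP redirect leak: the attacker's page whitelists only
# the target origin ("img-src https://social.example") and embeds an image
# whose URL redirects cross-origin only for logged-out visitors.

ALLOWED_SOURCES = {"https://social.example"}

def csp_allows(url: str) -> bool:
    # Crude stand-in for source-list matching: prefix check on the URL.
    return any(url.startswith(src + "/") for src in ALLOWED_SOURCES)

def final_url(logged_in: bool) -> str:
    # The target serves the resource to logged-in users, and 30x-redirects
    # everyone else to its login host.
    if logged_in:
        return "https://social.example/private/avatar.png"
    return "https://sso.example/login?next=/private/avatar.png"

def image_loads(logged_in: bool) -> bool:
    # Under the current spec, a redirect to a non-whitelisted origin fails
    # the load, which the embedding page can observe via onerror.
    return csp_allows(final_url(logged_in))
```

The attacker just watches whether `onload` or `onerror` fires: `image_loads(True)` and `image_loads(False)` differ, and that difference is the leak. The same observation is available without CSP wherever the redirect itself breaks the image load, which is the point above.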

> Are you suggesting that for site owners to protect themselves against
> the hole we are creating, all cross-domain 30x responses be exchanged
> for 403s? That would break a whole lot of sites, tools and use cases,
> and I think this is a non-starter. Feel free to modify my statement by
> adding a "practicably" in there though.
>

1. Replacing a login-requiring redirect with a 403 error page that contains
a login form with a cross-origin action seems like a perfectly reasonable
way of dealing with the issue of redirect detection.
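To sketch that first point (hypothetical handler; the SSO host and form markup are invented):

```python
# Sketch of the mitigation: answer unauthorized requests for a protected
# resource with a uniform 403 + cross-origin login form rather than a 30x
# to the login host. All names here are illustrative assumptions.

LOGIN_PAGE = (
    "<h1>403 Forbidden</h1>"
    '<form method="POST" action="https://sso.example/login">'
    '<input name="user"><input name="pass" type="password"></form>'
)

def handle_protected(path: str, authorized: bool) -> tuple[int, str]:
    """Return (status, body) for a request to a protected resource."""
    if authorized:
        return 200, f"contents of {path}"
    # No redirect ever fires, so a CSP-restricted <img> probe observes the
    # same failure regardless of the visitor's session state.
    return 403, LOGIN_PAGE
```

Since every unauthorized visitor gets an identical response, neither an image probe nor a CSP violation report distinguishes logged-in from logged-out.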

2. If that claim is impractical, isn't the claim you're making in response
to image-based holes equally impractical? "Don't have login protection for
images, you probably don't need it" seems unlikely to work either.

> I don't have a complaint :) You are saying "Reporting in CSP is good,
>  even though it introduces a security hole". I am saying: "Reporting can
> be made better, and without security holes. Reporting does not have to
> be tied to CSP, and there are non-CSP reporting use cases."
>

I'd like to understand how you envision reporting working in a way that
doesn't expose the hole you're pointing to. This sounds like a useful
avenue of exploration.

> > How is that different/better than returning a network error, which is
> > what the spec currently asks for?
>
> It doesn't reveal to the page if the content was loaded or not, so no
> cross domain leakage, which is the current problem with the spec.
>

A lot of that claim depends on the details, I suppose. And all of it
depends on there being no side-channel leakage, which is unlikely to be the
case.


> So this is about a page where CSP has failed in protecting against
> script injection, and a custom script is now running, and has gathered
> secret data.


Or one of the many scriptless techniques outlined in papers like
http://lcamtuf.coredump.cx/postxss/


> In this case, protection against loading cross domain
> inline resources is meant to stop the script from sending the secret
> data home? I don't think that would be very difficult for such a script,
> here are a few random first things I'd try.
> <snip>


Yup, many of these would be effective today (though significantly more
user-visible than an image or XHR); we're evaluating control over popups
and navigations for 1.2, but limiting either kinda worries me, as the cure
could be worse than the disease.


> Browsers are not designed to resist a site wanting to leak data, and I
> don't think they are able to. Claiming that CSP protects against this
> sounds like a false promise to me.
>

CSP mitigates these risks; it does not prevent them. I hope I didn't claim
that it did, only that blocking requests is more effective than not blocking
them.

> > I don't understand this conclusion. Again, `script-src example.com` is
> > still significantly better than nothing. Script can't load from
> > `evil.com`, for instance. That seems like a significant improvement.
>
> Webmasters have only so much time on hand for security. (Normally not
> nearly enough.) If they use their time on magic which doesn't work, they
> are worse off than if they had used their time on something which does
> work. If they spend time on implementing path restrictions, and it
> doesn't work, they are worse off than if they hadn't. In addition, they
> might actually think they are safe, making it even worse.
>

The harm you're positing is that a developer will spend tons of time
implementing CSP, only to see all that value crumble away when her partner
adds an ill-advised open redirect in a directory she didn't expect. Two
thoughts:

* She's significantly better off for having constructed a policy: `evil.com`
still won't load, which is great! The potential impact is limited to
scripts that share an origin she's whitelisted, which isn't nothing.

* If the developer doesn't expect to load redirected resources, perhaps we
could add an 'unsafe-redirect' (or similar) source expression which she'd
have to add to her policy in order to allow any redirection in the first
place. That would further mitigate the risk posed by accidental
introduction of otherwise whitelisted redirects.
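Under that proposal a policy might read something like the following (hypothetical keyword and syntax, purely illustrative):

```
Content-Security-Policy: img-src https://cdn.example 'unsafe-redirect'
```

Without the keyword, any load whose redirect chain leaves the whitelisted sources would simply be blocked, so an accidental open redirect couldn't widen the effective whitelist.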

> -------- Original Message --------
> From: Sigbjørn Vik <sigbjorn@opera.com>
> Date: Thu, 13 Feb 2014 09:50:48 +0100
> Phishing is a bigger problem than XSS, according to experts[1][2][3].
> [...]
> [1] http://www.scmagazine.com/phishing-remains-most-reliable-cyber-fraud-mechanism/article/248998/

The examples presented here ("Users can be tricked by emails masquerading
as warning notices from trusted brands", and "A user may receive a warning
allegedly from PayPal") wouldn't be helped by CSP.


> [1] http://www.proofpoint.com/uk/topten/index-roi.php
>

This is really hard to read. :) I think it's telling me that the #1 threat
is spear phishing, with the example of AP journalists being targeted after
individual research on social media sites. How does CSP help here? I assume
that finding the journalists on social media sites was the critical factor
in knowing that they were probably logged into those sites?

> [2] http://www.invincea.com/wp-content/uploads/Invincea-spear-phishing-watering-hole-drive-by-whitepaper-5.17.13.pdf

Spear phishing again. And short of buying everyone on the internet Invincea
firewalls, I'm not sure what CSP can do to help attackers.


> If I use
> targeted phishing, I might want to know as much as possible about the
> target first, for which CSP is a great aid.
>

I find it hard to come up with a scenario in which a targeted phishing
attack would use CSP to determine that I'm logged into Twitter, rather than
just looking for me on Twitter. That's a strawman, but when you get to the
smaller sites, I think it's even more salient. When RSAConf's site was
hacked via Codeco, the attacker targeted Codeco employees, who could be
reliably assumed to be logged in at work. Anecdotally, that seems to me to
be the threat we're worried about.


> CSP aids phishing even more in untargeted phishes, random URLs left in
> blog comments, shady ads (even on high profile sites), links from shady
> sites or similar. If I know you are logged in to a newspaper, presenting
> the "You need to log in to read this article" login page might be all it
> takes. Doing that randomly for random users would be extremely unlikely
> to work.
>

Alright, this is totally valid. I'm much less worried about this than about
spear phishing, however, as I'd claim that untargeted spam is the type of
phishing attack most likely to be shut down by SafeBrowsing and similar
services. They totally aren't perfect, but _this_ type of phishing is what
they're good at. Since they're terrible at spear phishing, I'm more
concerned about those.

That, to me, significantly mitigates the risk that CSP-based data-gathering
poses to these users.

-mike
Received on Friday, 30 May 2014 14:52:07 UTC
