
Re: Remove paths from CSP?

From: Mike West <mkwst@google.com>
Date: Mon, 26 May 2014 17:13:11 +0200
Message-ID: <CAKXHy=dFvaFn_Hpszom_FbugKZMNmbsd=uXwwayVBdATErYh_Q@mail.gmail.com>
To: Sigbjørn Vik <sigbjorn@opera.com>
Cc: Daniel Veditz <dveditz@mozilla.com>, Joel Weinberger <jww@chromium.org>, "Oda, Terri" <terri.oda@intel.com>, Michal Zalewski <lcamtuf@coredump.cx>, Egor Homakov <homakov@gmail.com>, "public-webappsec@w3.org" <public-webappsec@w3.org>, "Eduardo' Vela" <evn@google.com>
Let's go one more round, Sigbjorn. :) You're right that I still disagree with
your conclusions, but I do want to make sure that I've really understood the
suggestions you've offered and how they relate to the problems at hand.

I'm dropping the line-by-line in the interests of clarity. If I miss
something, please do point it out.

These are the problems we're discussing, as I understand them:

1.  The current candidate recommendation (http://w3c.org/TR/CSP) makes it
    possible to read the origin of a cross-origin redirect's target (e.g.
    `google.com` to `accounts.google.com`) by examining violation reports.

2.  The current working draft of CSP 1.1 (http://w3c.org/TR/CSP11) improves
    the attack in two ways:

    1. SecurityPolicyViolationEvent events allow in-page detection of
       violations.

    2. The path of a {same,cross}-origin redirect's target can be read (e.g.
       `socialnetwork.com/me` to `socialnetwork.com/mikewest`).

(Note: "read" in both cases means "brute-forced", except where the redirects
target images; images are already compromised in a number of ways (see [3]).)

The proposal I've made in [1] addresses only 2.2. By ignoring path components
after a redirect, `img-src socialnetwork.com/me socialnetwork.com/[brute-force]`
cannot be used to identify me, as `socialnetwork.com/mikewest` and
`socialnetwork.com/not-mikewest` will both match after `socialnetwork.com/me`
returns a redirect response.
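
As a sketch of that brute-force (my illustration; `socialnetwork.com` is the
thread's placeholder, and `probePolicy` is a hypothetical helper the attacker
would use to generate one candidate policy per guess):

```javascript
// An attacking page serves itself a policy like:
//
//   Content-Security-Policy: img-src socialnetwork.com/me socialnetwork.com/mikewest
//
// then loads <img src="//socialnetwork.com/me">, which redirects to the
// user's profile path. Before [1], the load succeeds only when the guessed
// path matches, so a violation (or its absence) identifies the user. After
// [1], paths are ignored post-redirect, so every guess matches and the
// probe learns nothing.
function probePolicy(candidate) {
  return 'img-src socialnetwork.com/me socialnetwork.com/' + candidate;
}
```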

A previous change[2] addressed part of 1 by changing the violation reports to
include only the URL that was initially requested (e.g. before following
redirects). That is, the violation report would contain `forum.org`, rather
than `sekrit.forum.org`.
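
Under [2], a report body would therefore look roughly like this (my sketch;
field names follow the candidate recommendation's report syntax, and all
URLs are illustrative). Note that `blocked-uri` carries the pre-redirect
`forum.org`, not `sekrit.forum.org`:

```json
{
  "csp-report": {
    "document-uri": "https://attacker.example/page",
    "violated-directive": "img-src 'self'",
    "blocked-uri": "http://forum.org/"
  }
}
```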

I think that these together reduce the risks CSP imposes significantly. The
outstanding criticisms, as I understand them, are:

1.  Problem #1 remains unaddressed.

2.  The proposal confuses developers by allowing non-path-matching resources
    to load after a redirect. This confusion might lead to accidental
    weakening of a site's policy by adding a seemingly unrelated redirect.

Let's look at #1 first: it's certainly a problem, as it allows attackers a
perfect oracle for the logged-in status of users on those websites that
redirect directly to a cross-origin login page in response to a resource
request (or have an "are you logged in?" endpoint that redirects to a
resource, as Google shouldn't but does[3]). I remain unconvinced that this is
a risk _unique_ to CSP, as large sites anecdotally _always_ have such holes
that are detectable in CSP's absence. That said, it would be good if we could
find a way to make CSP less awesome than it is at enabling this detection.

Sigbjorn has offered two suggestions:

One suggestion for dealing with #1 is to remove the reporting functionality
entirely. I believe that step would make it more difficult to use CSP to
detect a user's logged-in status, but certainly not impossible. Even setting
aside the very difficult-to-solve problem of timing attacks, basic image
requests still leak quite a bit of data (error events, `naturalHeight`, etc.)
about cross-origin requests, and removing those leakages doesn't seem to be
web-compatible.
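
To illustrate that leak (again my sketch, not the thread's; the endpoint URL
is made up), an attacker needs no CSP at all:

```javascript
// Even with no CSP reporting, a cross-origin <img> pointed at an
// "are you logged in?" endpoint fires onload with real dimensions when
// the user is logged in (the endpoint redirects to an avatar, say) and
// onerror otherwise. Browser-only parts are guarded.

// Pure helper: classify the outcome of one probe.
function classifyProbe(loaded, naturalWidth) {
  if (!loaded) return 'blocked-or-error';          // onerror fired
  return naturalWidth > 0 ? 'image-loaded' : 'loaded-empty';
}

if (typeof Image !== 'undefined') {
  var probe = new Image();
  probe.onload = function () {
    console.log(classifyProbe(true, probe.naturalWidth));
  };
  probe.onerror = function () {
    console.log(classifyProbe(false, 0));
  };
  probe.src = 'https://social.example/logged-in-check'; // hypothetical
}
```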

More saliently, I don't want to remove reporting functionality for two
reasons:

1.  Based on anecdotal evidence, CSP is tough to get right. Without
    reporting, I don't think internal Google properties would have turned it
    on yet, even after the extensive internal testing they're doing. I'd be
    interested in hearing from other authors out there (Twitter? Facebook?),
    but (again, anecdotally) every author I've talked to outside Google
    started with a report-only policy and ramped up.

2.  Reporting provides herd immunity; even in a world where extensions are
    accidentally blocked by bad browser implementations (I know Blink has
    some bugs), authors use reporting as an ongoing monitor of breaches and
    potential injection vulnerabilities. At scale, CSP reports can point to
    the areas of an application that are vulnerable, and point to structural
    issues you'd never otherwise detect. Twitter used CSP reports as impetus
    to convince themselves to move to HTTPS to avoid carrier-based injection,
    if I'm recalling correctly.

Assuming that we did follow this suggestion, however, we'd need to deal with
the detectable bits of blockage that we don't explicitly communicate to the
server. To address this issue, Sigbjorn suggested that we pretend to the DOM
that the page loaded as it would have without CSP. First, calling this "not
trivial to implement" is a massive understatement. :) Second, it removes
CSP's ability to prevent data exfiltration (as we'd have to load the
resources to grab image dimensions and so on). I think I must be
misrepresenting this suggestion, because it seems unworkably hard to me.

For criticism #2: the question of complexity is, I believe, marginal. If a
developer can understand CSP well enough to implement it, she can grasp both
the impact of and the impetus for the redirect exception. The suggestion that
the sheer number of developers means that any increase in complexity needs to
be carefully weighed is absolutely correct, though, so what's the risk?

At worst, we end up in a world where path restrictions don't exist for
developers who inadvertently allow redirects into otherwise whitelisted
paths. They still have the origin-based protections of CSP 1.0. I see that as
a reasonable baseline, though I would certainly appreciate any suggestions
that didn't involve that compromise.

That's how I see the current situation. I'd appreciate feedback from you,
Sigbjorn, as well as from the other folks who have weighed in on the thread.

[1]: https://github.com/w3c/webappsec/pull/18
[2]: https://github.com/w3c/webappsec/commit/59cf28abbc1c3f9db7bc26e03ba783322d28e74f
[3]: http://www.tomanthony.co.uk/tools/detect-social-network-logins/

--
Mike West <mkwst@google.com>
Google+: https://mkw.st/+, Twitter: @mikewest, Cell: +49 162 10 255 91

Google Germany GmbH, Dienerstrasse 12, 80331 München, Germany
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg
Geschäftsführer: Graham Law, Christine Elizabeth Flores
(Sorry; I'm legally required to add this exciting detail to emails. Bleh.)


On Wed, May 21, 2014 at 5:25 PM, Sigbjørn Vik <sigbjorn@opera.com> wrote:

> On 21-May-14 17:02, Daniel Veditz wrote:
> > On 5/20/2014 7:18 AM, Sigbjørn Vik wrote:
> >> However, I do not think I will be able to convince you to support the
> >> alternative proposal of dropping error reporting instead, even if that
> >> from a security point of view is better.
> >
> > I'm not convinced error reporting is the problem, though--the fact that
> > it's blocked is. Can't you detect whether something got blocked through
> > onload/onerror entirely within the attack page?
>
> Correct. So the browser would have to pretend the page loaded (in the
> same way it would have done without CSP), regardless of whether it was
> blocked or not. This is not trivial to implement; it is essentially what
> the same origin policy does: avoid leaking information between unrelated
> origins.
>
> > That said, I'd almost be happy to consider dropping reporting because I
> > think the flood of false-positive reports people get when they use it
> > prevents people from deploying CSP.
>
> :) Error reporting does have its uses, and given that it already has
> implementations, it should be possible to make it available for users
> who are so inclined. For instance through an extension API.
>
> --
> Sigbjørn Vik
> Opera Software
>
Received on Monday, 26 May 2014 15:14:00 UTC
