Re: Remove paths from CSP?

On 12-Feb-14 14:19, Mike West wrote:
> On Wed, Feb 12, 2014 at 2:02 PM, Sigbjørn Vik <sigbjorn@opera.com
> <mailto:sigbjorn@opera.com>> wrote:
> 
>     Well, right there is a problem. If a page author can tell whether
>     a redirected-to third party resource fits a regexp or not, he can
>     trace the redirection chains of that third party's resources. This
>     opens up a new hole in the same-origin policy.
> 
> There is no regex matching capability in CSP. Path leakage is a
> brute-force attack. Otherwise this is an accurate description of the
> problem I'd like to address. 

Apologies for sloppy wording. I simply meant some string matching
capability.
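
For concreteness, the kind of probe we are both describing would look
roughly like this (a sketch only; the URLs, paths and policy below are
made up). The page ships a path-restricted policy such as

  Content-Security-Policy: img-src https://third-party.example/redirect
      https://third-party.example/users/alice/

and then checks whether a load through the known redirector succeeds:

  // TypeScript sketch: resolves true if the redirect target matched the
  // guessed path in the policy, false if CSP blocked it post-redirect.
  function probeRedirectTarget(redirectorUrl: string): Promise<boolean> {
    return new Promise(resolve => {
      const img = new Image();
      img.onload = () => resolve(true);
      img.onerror = () => resolve(false);
      img.src = redirectorUrl; // e.g. https://third-party.example/redirect
    });
  }

Repeating this across documents with different candidate paths in the
policy is the brute-force probing you mention.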

> Given the prevalence of open redirects, I don't think we can reasonably
> allow all redirects to bypass a page's CSP. Perhaps something like
> Egor's suggestion to allow redirects for specific paths is a good
> compromise, which limits the potential leakage to cross-origin redirects
> on an origin (rather than path) basis.

Leakage on an origin basis is already bad enough.

>     > An issue is that leakage is inherent in the functionality: it isn't
>     > possible to block an image from loading, for example, without
>     making the
>     > fact that the image was blocked visible to the page that loaded it.
> 
>     That should be fairly easy. Even if blocked, call onload, and return the
>     image dimensions to the page. That is all a page can detect anyway.
> 
> How do we know the original image dimensions if we block the load?

Don't block the load, only the page interaction.
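
That is, today a page distinguishes the two cases roughly like this
(sketch only; the URL is made up):

  // TypeScript sketch of how a page detects a blocked image today.
  const img = new Image();
  img.onload = () =>
    console.log('loaded', img.naturalWidth, 'x', img.naturalHeight);
  img.onerror = () => console.log('blocked by CSP, or failed otherwise');
  img.src = 'https://third-party.example/avatar.png'; // hypothetical URL

If the browser still fetches the resource, fires onload and reports the
real dimensions even when the policy forbids it, the page can no longer
tell the difference; it simply never gets to use the pixels.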

>     If CSP is to be at all valuable, it needs to avoid opening new security
>     holes, not trade one security hole for another.
> 
> I agree with the sentiment.
> 
>     Especially not if the former is avoidable by good web site security
>     practices, while the latter is not.
> 
> Theoretically, you're right: preventing content injection attacks is an
> absolutely trivial problem. Practically, we're really bad at it,
> collectively. If we were good at it, CSP would be fairly pointless.
> Since we're not, I (with admitted bias) think it's quite valuable.

The issue here is that you are solving a problem for the many sloppy
developers by creating a problem for the good ones, thus preventing
great security (now and in the future), even if it possibly improves
the current average security.

>     Block disallowed resources from interacting with the page, but pretend
>     to the page that they were loaded.
> 
> Pretending is difficult, particularly given that we, in the best case,
> want to avoid making requests for blocked resources.

This is a new problem description. I agree that this would be ideal,
but its importance is far less than that of the original goal, and, in
my opinion, far less than the problems it introduces.

> Moreover, perfectly pretending that we loaded the resource (e.g.
> replicating all the side-effects of loading a resource, including layout
> changes, script execution, etc) is indistinguishable from not blocking
> the resource in the first place. If we avoid some of the side-effects
> (e.g. we don't want to execute blocked script), there will be detectable
> differences.

Yes, there will be detectable differences. One might use timing data to
tell that a script did not execute, and thus must have been blocked. On
some web sites, this may leak logged-in information. However, this is
preventable if the web site really cares. Nor can it be abused to
reveal details of intranet domain names, local proxy settings, browser
overrides/extensions, etc. What it does reveal will in most cases be
indistinguishable from other, unrelated failures, and will at best be
heuristic. While not perfect, it is far better than the alternative.
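
To illustrate the kind of heuristic I mean (hypothetical; the URL,
global name and use of timing are assumptions on my part):

  // TypeScript sketch: guess whether an external script was blocked by
  // checking for its expected side effect after onload fires.
  const start = performance.now();
  const s = document.createElement('script');
  s.src = 'https://third-party.example/widget.js';
  s.onload = () => {
    const elapsed = performance.now() - start;
    // A real run would have defined window.thirdPartyWidget; its absence
    // (or an implausibly quick "load") only hints that it was blocked.
    const ran = typeof (window as any).thirdPartyWidget !== 'undefined';
    console.log(ran ? 'executed' : 'no side effects after ' + elapsed + ' ms');
  };
  document.head.appendChild(s);

The same signal appears when the script is simply down or broken, which
is why I say it is heuristic rather than a reliable oracle.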

-- 
Sigbjørn Vik
Opera Software

Received on Wednesday, 12 February 2014 13:51:53 UTC