
Re: Whitelisting external resources by hash (was Re: Finalizing the shape of CSP ‘unsafe-dynamic’)

From: Mike West <mkwst@google.com>
Date: Tue, 7 Jun 2016 21:59:17 +0200
Message-ID: <CAKXHy=eNh8rVVhcPmOWOE6e_kgzk5CLtJUC93tO0ejFt23SouA@mail.gmail.com>
To: Artur Janc <aaj@google.com>
Cc: Brad Hill <hillbrad@gmail.com>, Devdatta Akhawe <dev.akhawe@gmail.com>, WebAppSec WG <public-webappsec@w3.org>, Christoph Kerschbaumer <ckerschbaumer@mozilla.com>, Daniel Bates <dabates@apple.com>, Devdatta Akhawe <dev@dropbox.com>
On Tue, Jun 7, 2016 at 8:27 PM, Artur Janc <aaj@google.com> wrote:

> - You could whitelist specific URLs for script-src without risking
> redirect-based whitelist bypasses. For example `script-src 'self'
> ajax.googleapis.com/totally/safe.js` is an ineffective policy if there is
> an open redirect in 'self' due to the ability to load other scripts from
> ajax.googleapis.com caused by CSP's path-dropping behavior. A hash would
> avoid this problem.
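(A minimal sketch of the bypass Artur describes, with a hypothetical open-redirect endpoint: under CSP2's redirect matching, once the request redirects cross-origin, only the scheme/host/port of the matched source expression is enforced and the /totally/safe.js path is dropped, so any script on that host becomes loadable.)

```
Content-Security-Policy: script-src 'self' ajax.googleapis.com/totally/safe.js

<!-- /redirect is a hypothetical open redirect on 'self';
     after the redirect, the path restriction no longer applies -->
<script src="/redirect?url=https://ajax.googleapis.com/any/other/script.js"></script>
```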

I think you might have something in mind other than just hashing the URL?
It's not clear to me how a different spelling of the URL would mitigate the
issues that lead to the path-dropping-after-redirect behavior. Denying
redirects entirely, perhaps?

> - It would allow more flexibility in whitelisting exact script URLs. Using
> a traditional URL whitelist it's not possible to have a safe policy in an
> application which uses JSONP (script-src /api/jsonp can be abused by
> loading /api/jsonp?callback=evilFunction). With hashes you could allow
> SHA256("/api/jsonp?callback=goodFunction") but an attacker could not use
> such an interface to execute any other functions.
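(A sketch of how such a URL hash might be computed. To be clear about assumptions: CSP2's 'sha256-...' expressions hash inline script *content*; hashing a request URL, query string included, is the proposed extension under discussion here, so the function below is purely illustrative.)

```python
import base64
import hashlib

def csp_url_hash(url: str) -> str:
    """Spell a hypothetical CSP source expression that hashes a full URL.

    Illustrative only: the spec defines hashes over inline script content,
    not URLs. Hashing the URL including its query string is the proposed
    extension being discussed in this thread.
    """
    digest = hashlib.sha256(url.encode("utf-8")).digest()
    return "'sha256-" + base64.b64encode(digest).decode("ascii") + "'"

# The allowed JSONP call hashes to one fixed value...
good = csp_url_hash("/api/jsonp?callback=goodFunction")
# ...while an attacker-chosen callback yields a different hash,
# which would not match the policy.
evil = csp_url_hash("/api/jsonp?callback=evilFunction")
```

Because the query string is part of the hashed input, the two URLs above produce different source expressions, which is what prevents the callback-swapping abuse.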

Is hashing important here? Would extending the source expression syntax to
include query strings be enough?

> - It would work with a policy based on 'unsafe-dynamic' / 'drop-whitelist'
> -- even if the host-source is dropped, the hash would offer a way to
> include specific external scripts.
> For CSP to become a useful XSS protection we will almost certainly have to
> move away from the whitelist-based model.

I think we agree that Google will certainly need to move away from the
whitelist-based model. Though I agree with you that a nonce-based model is
simpler to deploy for many sites, GitHub seems to be a reasonable
counter-example to the claim that moving away is necessary in general.

> Dynamic applications can often use nonces instead, but for static content,
> or situations where using nonces would be difficult, I think hashes are a
> good solution -- one of their main benefits is that they're already in the
> spec and any expansion of their capabilities would be a relatively small
> change. (Another upside is that they can be used in a backwards-compatible
> way alongside a whitelist.)
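(For comparison, the nonce-based deployment mentioned above might look roughly like the following; the nonce value and script URL are illustrative, and 'unsafe-dynamic' is the keyword under discussion in this thread, which CSP3 later standardized as 'strict-dynamic'.)

```
Content-Security-Policy: script-src 'nonce-R4nd0mV4lu3' 'unsafe-dynamic'; object-src 'none'

<script nonce="R4nd0mV4lu3" src="https://example.com/app.js"></script>
<script nonce="R4nd0mV4lu3">initApp();</script>
```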

I still don't understand why hashing a URL is useful. :(

Received on Tuesday, 7 June 2016 20:00:15 UTC
