On Tue, Jun 7, 2016 at 8:27 PM, Artur Janc <aaj@google.com> wrote:
> - You could whitelist specific URLs for script-src without risking
> redirect-based whitelist bypasses. For example, `script-src 'self'
> ajax.googleapis.com/totally/safe.js` is an ineffective policy if there
> is an open redirect in 'self', because CSP's path-dropping behavior on
> redirects then allows loading any other script from ajax.googleapis.com.
> A hash would avoid this problem.
>
I think you might have something in mind other than just hashing the URL?
It's not clear to me how a different spelling of the URL would mitigate
the issues that motivated the path-dropping-after-redirect behavior in the
first place. Denying redirects entirely, perhaps?
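
To make sure we're talking about the same attack, here's the bypass as I
understand it (the redirecting endpoint and hostnames are made up):

    Content-Security-Policy: script-src 'self' ajax.googleapis.com/totally/safe.js

    <!-- Injected markup points at an open redirect on 'self': -->
    <script src="https://victim.example/redirect?u=https://ajax.googleapis.com/evil.js"></script>

Because the script is the result of a redirect, the path component of the
ajax.googleapis.com source expression is ignored during matching, so
evil.js loads and executes. Spelling the whitelist entry as a hash of the
URL doesn't obviously change those matching rules.
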
> - It would allow more flexibility in whitelisting exact script URLs. Using
> a traditional URL whitelist, it's not possible to have a safe policy in
> an application that uses JSONP (script-src /api/jsonp can be abused by
> loading /api/jsonp?callback=evilFunction). With hashes you could allow
> SHA256("/api/jsonp?callback=goodFunction") but an attacker could not use
> such an interface to execute any other functions.
>
Is hashing important here? Would extending the source expression syntax to
include query strings be enough?
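
For concreteness: source expressions today match the scheme, host, and
path, but ignore the query string entirely, so

    Content-Security-Policy: script-src https://example.com/api/jsonp

permits

    <script src="https://example.com/api/jsonp?callback=evilFunction"></script>

whereas a hypothetical extension that matched the full URL, query string
included, would seem to pin the interface down as tightly as a hash of the
URL would:

    Content-Security-Policy: script-src https://example.com/api/jsonp?callback=goodFunction

(example.com is a placeholder here.)
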
> - It would work with a policy based on 'unsafe-dynamic' / 'drop-whitelist'
> -- even if the host-source is dropped, the hash would offer a way to
> include specific external scripts.
>
> For CSP to become a useful XSS protection, we will almost certainly have
> to move away from the whitelist-based model.
>
I think we agree that Google will certainly need to move away from the
whitelist-based model. And though I agree with you that a nonce-based
model is simpler to deploy for many sites, GitHub seems to be a reasonable
counter-example to the claim that moving away is necessary in general.
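
(For reference, my rough mental model of the 'unsafe-dynamic' combination
you describe is something like the following, where both the keyword and
the URL-hash source are hypothetical syntax today:

    Content-Security-Policy: script-src 'nonce-r4nd0m' 'unsafe-dynamic'
        'sha256-<digest of "https://static.example.com/widget.js">'

That is, a browser that understood 'unsafe-dynamic' would drop the
host-source whitelist, but the URL hash would still authorize that one
external script.)
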
> Dynamic applications can often use nonces instead, but for static content,
> or in situations where using nonces would be difficult, I think hashes are a
> good solution -- one of their main benefits is that they're already in the
> spec and any expansion of their capabilities would be a relatively small
> change. (Another upside is that they can be used in a backwards-compatible
> way alongside a whitelist.)
>
I still don't understand why hashing a URL is useful. :(
-mike