Re: Whitelisting external resources by hash (was Re: Finalizing the shape of CSP ‘unsafe-dynamic’)

(risk of looking stupid here)

Why isn't a nonce appropriate for truly static content?  Yes, the nonce
doesn't change, but neither does the content.  Is it to defend against
the case where the static content has a DOM XSS that happens to perform
a parser-inserted script load, and the attacker can control it to
propagate the nonce there?
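Concretely, the setup I have in mind is something like this (nonce value
and script URL made up for illustration):

```
Content-Security-Policy: script-src 'nonce-R4nd0mV4lu3'

<script nonce="R4nd0mV4lu3" src="https://static.example.com/app.js"></script>
```

Since the page is static, the nonce is effectively a fixed whitelist
token rather than a per-response secret -- which is why I'm asking what
attack the static nonce actually fails to stop.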

On Tue, Jun 7, 2016 at 12:59 PM Mike West <> wrote:

> On Tue, Jun 7, 2016 at 8:27 PM, Artur Janc <> wrote:
>> - You could whitelist specific URLs for script-src without risking
>> redirect-based whitelist bypasses. For example `script-src 'self'
>>` is an ineffective policy if there
>> is an open redirect in 'self' due to the ability to load other scripts from
>> caused by CSP's path-dropping behavior. A hash would
>> avoid this problem.
> I think you might have something in mind other than just hashing the URL?
> It's not clear to me how a different spelling of the URL would mitigate the
> issues that lead to the path-dropping-after-redirect behavior. Denying
> redirects entirely, perhaps?
>> - It would allow more flexibility in whitelisting exact script URLs.
>> Using a traditional URL whitelist it's not possible to have a safe policy
>> in an application which uses JSONP (script-src /api/jsonp can be abused by
>> loading /api/jsonp?callback=evilFunction). With hashes you could allow
>> SHA256("/api/jsonp?callback=goodFunction") but an attacker could not use
>> such an interface to execute any other functions.
> Is hashing important here? Would extending the source expression syntax to
> include query strings be enough?
>> - It would work with a policy based on 'unsafe-dynamic' /
>> 'drop-whitelist' -- even if the host-source is dropped, the hash would
>> offer a way to include specific external scripts.
>> For CSP to become a useful XSS protection we will almost certainly have
>> to move away from the whitelist-based model.
> I think we agree that Google will certainly need to move away from the
> whitelist-based model. Though I agree with you that a nonce-based model is
> simpler to deploy for many sites, GitHub seems to be a reasonable
> counter-example to general necessity.
>> Dynamic applications can often use nonces instead, but for static
>> content, or situations where using nonces would be difficult, I think
>> hashes are a good solution -- one of their main benefits is that they're
>> already in the spec and any expansion of their capabilities would be a
>> relatively small change. (Another upside is that they can be used in a
>> backwards-compatible way alongside a whitelist.)
> I still don't understand why hashing a URL is useful. :(
> -mike
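P.S. For what it's worth, here is a sketch of how the URL-hash variant
Artur describes might be computed, reusing the existing sha256 hash-source
encoding. This is hypothetical -- hash-sources in the current spec match
inline content, not URLs -- but it shows the shape of the idea:

```python
import base64
import hashlib

# Hash the exact script URL, query string included, so that
# /api/jsonp?callback=evilFunction produces a different digest
# and would not match the policy.
url = "/api/jsonp?callback=goodFunction"
digest = base64.b64encode(hashlib.sha256(url.encode()).digest()).decode()

# Hypothetical policy syntax, spelled like today's hash-sources:
policy = f"script-src 'sha256-{digest}'"
print(policy)
```

Unlike a path-based whitelist entry, the digest commits to the full URL,
so an attacker cannot vary the callback parameter.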

Received on Tuesday, 7 June 2016 23:50:41 UTC