Re: Whitelisting external resources by hash (was Re: Finalizing the shape of CSP ‘unsafe-dynamic’)

On Tue, Jun 7, 2016 at 6:15 PM, Brad Hill <hillbrad@gmail.com> wrote:

> +1 that having the smallest possible number of standard ways to identify
> content is the best choice for a coherent platform.
>
> This does argue that whatever name we use, it shouldn't have "nonce" in
> it, as there is clearly interest in identifying the entry points for
> dynamic trust propagation with more than just nonces.
>
> On Tue, Jun 7, 2016 at 6:14 AM Mike West <mkwst@google.com> wrote:
>
>> While we're on the topic, I'd like to harden that example via
>>>> externalized hashes (e.g. `sha256-abc...` would allow `<script
>>>> integrity="sha256-abc..." ...>` to load). I'd like to find a mechanism to
>>>> do so in a backwards compatible way. We discussed it briefly at our last
>>>> meeting. Anyone have any good ideas? :)
>>>>
>>>
>>> To properly discuss it, I'd suggest doing it on another thread, maybe? ;)
>>>
>>
>> Done. :)
>>
>>
>>> FWIW my preference would be to allow hashes to whitelist script URLs
>>> rather than contents, and keep SRI as a mechanism to enforce integrity...
>>>
>>
>> What do you mean by "allow hashes to whitelist script URLs"? Adding
>> `SHA256("https://example.com")` to a policy to match a resource at "
>> https://example.com"? I don't see any advantage to doing so (other than
>> policy length, I suppose?).
>>
>
Yes, that's indeed what I had in mind. I agree with the problem pointed out
by Brad (conflicting mechanisms doing the same thing), but allowing hashes
to whitelist expected script#src values would have benefits that could in
some ways offset this downside:
- You could whitelist specific URLs for script-src without risking
redirect-based whitelist bypasses. For example, `script-src 'self'
ajax.googleapis.com/totally/safe.js` is ineffective if 'self' contains an
open redirect: because of CSP's path-dropping behavior on redirected
requests, an attacker could then load any script from
ajax.googleapis.com. A hash of the full URL would avoid this problem.
- It would allow more flexibility in whitelisting exact script URLs. With
a traditional URL whitelist it's not possible to have a safe policy in an
application which uses JSONP (`script-src /api/jsonp` can be abused by
loading `/api/jsonp?callback=evilFunction`). With hashes you could allow
`SHA256("/api/jsonp?callback=goodFunction")`, and an attacker could not
use such an interface to execute any other function.
- It would work with a policy based on 'unsafe-dynamic' / 'drop-whitelist'
-- even if the host-source is dropped, the hash would offer a way to
include specific external scripts.

For CSP to become a useful XSS protection, we will almost certainly have to
move away from the whitelist-based model. Dynamic applications can often
use nonces instead, but for static content, or situations where using
nonces would be difficult, I think hashes are a good solution -- one of
their main benefits is that they're already in the spec and any expansion
of their capabilities would be a relatively small change. (Another upside
is that they can be used in a backwards-compatible way alongside a
whitelist.)
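As a hypothetical illustration of that backwards-compatible deployment (the `sha256-abc...` value is a placeholder, not a real digest): since source expressions in a directive are alternatives, a policy could carry both a host-source and a hash, e.g.

```
Content-Security-Policy: script-src ajax.googleapis.com 'sha256-abc...'
```

Browsers without support for matching external scripts against hashes would simply fall back to the host-source whitelist, while supporting browsers could enforce the stricter hash match.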




>
>>
>>> Otherwise, the "static content" case will be difficult to achieve with
>>> hashes because any changes to the external scripts will break the policy,
>>> since the digest will no longer match.
>>>
>>
>> I'd like to tie the CSP implementation to the SRI implementation. If/when
>> SRI2 supports something other than flat content matches (signatures, etc),
>> then CSP would flow right along.
>>
>> As long as we have SRI that supports the brittle kind of loading behavior
>> that you note above (which I do believe is valuable, though I recognize its
>> drawbacks), it makes sense for CSP to have the same behavior.
>>
>> -mike
>>
>

Received on Tuesday, 7 June 2016 18:28:09 UTC