RE: Signed CSP

What does the Wired article tell us that helps? I get that it is an important problem; I just don't see how your proposal addresses that problem.

Sent from my Windows Phone
________________________________
From: Scott Arciszewski <kobrasrealm@gmail.com>
Sent: 2/15/2015 2:27 PM
To: Crispin Cowan <crispin@microsoft.com>
Cc: public-webappsec@w3.org
Subject: Re: Signed CSP

http://www.wired.com/2013/09/freedom-hosting-fbi/

On Sun, Feb 15, 2015 at 5:15 PM, Crispin Cowan <crispin@microsoft.com> wrote:
So the client trusts the offline signing key, but not the server.

An attacker can compromise everything about the server except its CSP: JS, content, etc.

So the attacker can load this server full of lies and such, but can’t change its CSP. What threats does that defend the client against?

• This site can’t be XSS’d by another site: don’t care, this site is already completely p0wned.

• Other sites can’t be XSS’d by this site: I don’t think this site’s CSP assures that, and I don’t really care; the attacker will just XSS you from a different place.

So I’m still not seeing a real threat that this really mitigates.

From: Scott Arciszewski [mailto:kobrasrealm@gmail.com]
Sent: Sunday, February 15, 2015 1:23 PM
To: Crispin Cowan
Cc: public-webappsec@w3.org
Subject: Re: Signed CSP

What value does this proposal deliver that you do not get by combining HSTS pinning with CSP?

The signing key remains offline, so an attacker cannot forge signatures. HSTS + CSP does not achieve this since the SSL private key must remain accessible to the server.

The goal here is to disrupt the US government's malware campaigns on the Tor network, but it could also be advantageous against less sophisticated threats.
On Sun, Feb 15, 2015 at 3:35 PM, Crispin Cowan <crispin@microsoft.com> wrote:
What value does this proposal deliver that you do not get by combining HSTS pinning with CSP?

In particular, since the threat you are trying to defend against is a compromised server, the most the client can do is ensure that this is still the server you think it is. Even that is dubious, because a compromised server gives the SSL private key to the attacker. Doing more than that would seem to prevent the legitimate site admin from changing policy.

From: Scott Arciszewski [mailto:kobrasrealm@gmail.com]
Sent: Sunday, February 15, 2015 7:28 AM
To: public-webappsec@w3.org
Subject: Signed CSP

I love Content-Security-Policy headers, but I feel that they could do more to protect end-users from malicious JavaScript, especially if the entire host web server gets compromised and attackers are able to tamper with headers at will.

I would like to propose an extension to the Content-Security-Policy specification to mitigate the risk of a hacked server distributing malware, similar to what happened during the Freedom Hosting incident in 2013.
The new proposed header looks like this:
Signed-Content-Security-Policy: /some_request_uri publicKeyA, [publicKeyB, ... ]
WHEREBY:
--------
* /some_request_uri points to a message signed with the secret key corresponding to one of the public keys specified in the header
* /some_request_uri contains a full CSP definition with one caveat: hashes of script src files are required!
* The proposed signing mechanism is EdDSA, possibly Ed25519 (depending on CFRG's final recommendation to the TLS working group)
* At least one public key is required, but multiple are allowed (more on this below)
With this mechanism in place on both the client and the server, an attacker who compromised a server (say, a Tor Hidden Service) could not tamper with the JavaScript to deliver malware onto client machines without access to the EdDSA secret key, a hash collision against the script hashes in the CSP definition, or a way to fool the client into accepting a bad public key.
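For concreteness, a purely illustrative example (the /signed-csp.txt path is my own placeholder for /some_request_uri, the script hash is a stand-in, and exactly how the signature is framed alongside the policy is left open by the proposal):

    Signed-Content-Security-Policy: /signed-csp.txt publicKeyA, publicKeyB

where /signed-csp.txt serves something like

    default-src 'self'; script-src 'sha256-<base64 hash of each allowed script>'

plus an Ed25519 signature over those bytes. The client would check the signature against one of the listed public keys before enforcing the policy.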
Server Implementation:

Let's say I wish to publish a Tor Hidden Service that hosts the unredacted Snowden files. These are the steps I would need to take to prevent malware deployment (a rough key-generation and signing sketch follows the list):
1. Generate N EdDSA secret/public key pairs (N > 2).
2. Put all of the public keys in the SCSP header.
3. Use only one secret key for signing from an airgapped machine whenever a website update is required. The rest should remain on encrypted thumb drives which are in hidden caches.
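As a rough sketch of steps 1 through 3 only: this assumes Python with the 'cryptography' package, and the file names, hex encoding of the public keys, and detached-signature framing are placeholders of mine, not settled parts of the proposal.

    # Hypothetical offline sketch: run key generation and signing on the airgapped machine.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    N = 3  # step 1: generate N > 2 key pairs
    keys = [Ed25519PrivateKey.generate() for _ in range(N)]

    # Step 2: these hex-encoded public keys would be listed in the SCSP header.
    for k in keys:
        raw = k.public_key().public_bytes(
            encoding=serialization.Encoding.Raw,
            format=serialization.PublicFormat.Raw,
        )
        print(raw.hex())

    # Step 3: sign the CSP policy document with one secret key;
    # the remaining secret keys stay in cold storage.
    policy = open("csp_policy.txt", "rb").read()
    signature = keys[0].sign(policy)               # 64-byte Ed25519 signature
    open("csp_policy.sig", "wb").write(signature)

The signed policy (together with its signature, however that ends up being framed) is what /some_request_uri would serve.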
Client Implementation:
Upon accessing a website with an SCSP header, render the key fingerprints and ask the user whether they trust this series of hexits. If someone later attempts to add or replace any of the public keys, immediately disable JavaScript and panic loudly to the user. This is basically the SSH model of trust, but in the event of a signing key compromise, one of the other keys can be used and the untrusted public key can be removed without causing a ruckus for the end user.
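A minimal sketch of that verification step, again assuming Python with the 'cryptography' package and hex-encoded keys; the actual browser UI, fingerprint prompt, and storage are not modeled here, and the function name and arguments are illustrative only:

    # Hypothetical client-side check: pinned (user-trusted) keys, then verify the signed policy.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_signed_csp(header_keys_hex, trusted_keys_hex, policy_bytes, signature):
        # Any key added or replaced since the user's trust decision -> disable JS and panic.
        if not set(header_keys_hex) <= set(trusted_keys_hex):
            return None
        for key_hex in header_keys_hex:
            pub = Ed25519PublicKey.from_public_bytes(bytes.fromhex(key_hex))
            try:
                pub.verify(signature, policy_bytes)  # raises InvalidSignature on failure
                return policy_bytes                  # enforce this policy, script hashes and all
            except InvalidSignature:
                continue
        return None                                  # nothing verified: treat as hostile

Note that removing a key from the header still verifies (the remaining keys are a subset of what the user trusted), which matches the key-compromise recovery path described above.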
Users' trust decisions should be stored in a separate file from cert8.db, and users should be able to tell their browser where to store it. In "private browsing" modes, this file should be cloned into memory and never written back to disk without explicit user action (e.g. for Tor Browser Bundle users).

This is obviously a very rough draft, but I would love to get feedback on it and, if everyone approves, move forward with developing it into something greater. (Browser extension? Internet Standard? Not my place to say :)
Scott Arciszewski

Received on Monday, 16 February 2015 01:48:50 UTC