Signed CSP

I love Content-Security-Policy headers, but I feel that they could do more
to protect end users from malicious JavaScript, especially if the entire
host web server gets compromised and attackers are able to tamper with
headers at will.

I would like to propose an extension to the Content-Security-Policy
specification to mitigate the risk of a hacked server distributing malware,
similar to what happened during the Freedom Hosting incident in 2013.

The new proposed header looks like this:

Signed-Content-Security-Policy: /some_request_uri publicKeyA [, publicKeyB,
... ]

WHEREBY:
--------

* /some_request_uri is a message whose signature verifies under one of the
public keys specified in the header
* /some_request_uri contains a full CSP definition with one caveat: hashes
of script-src files are required!
* The proposed signing mechanism is EdDSA, possibly Ed25519 (depending on
CFRG's final recommendation to the TLS working group)
* At least one public key is required, but multiple are allowed (more on
this below)
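To make the header shape concrete, here is a rough sketch of how a client
might parse the proposed value into its two components. The grammar is
purely illustrative; the final syntax would of course be up to the spec.

```python
def parse_scsp(header_value):
    """Split a Signed-Content-Security-Policy value into the signed
    policy's request URI and a list of one or more public keys.

    Expected (illustrative) form:
        /some_request_uri publicKeyA [, publicKeyB, ...]
    """
    parts = header_value.strip().split(None, 1)
    if len(parts) != 2:
        raise ValueError("SCSP header needs a request URI and at least one key")
    uri, key_list = parts
    if not uri.startswith("/"):
        raise ValueError("signed policy location must be a request URI")
    # Keys are comma-separated; ignore stray whitespace and empty entries.
    keys = [k.strip() for k in key_list.split(",") if k.strip()]
    if not keys:
        raise ValueError("at least one public key is required")
    return uri, keys
```

A verifier would then fetch the URI, check the signature against each
listed key, and only apply the embedded CSP once one of them verifies.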

With this mechanism in place on the client and the server, if you were to
compromise a server (say, a Tor Hidden Service), you would not be able to
tamper with the JavaScript to deliver malware onto the client machines
without access to the EdDSA secret key (or a hash collision in the CSP
definition), or without fooling the client into accepting a bad public key.
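The "hashes of script-src files" requirement can reuse CSP's existing
hash-source token format ('sha256-<base64 digest>'), so computing the
values for the signed policy is straightforward:

```python
import base64
import hashlib

def csp_hash_source(script_bytes):
    """Return a CSP hash-source token for a script's exact bytes,
    in the 'sha256-<base64>' form CSP Level 2 already defines."""
    digest = hashlib.sha256(script_bytes).digest()
    return "'sha256-" + base64.b64encode(digest).decode("ascii") + "'"

# Each such token goes into the signed policy's script-src directive;
# any tampered script then fails the hash check even before signatures
# enter the picture.
token = csp_hash_source(b"alert('hi');")
```

Since the policy document pins every script by digest and the policy
itself is signed, an attacker on the web server has nothing left to
swap out.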

Server Implementation:

Let's say I wish to publish a Tor Hidden Service that hosts the
unredacted Snowden files. These are the steps I would need to take to
prevent malware deployment:

1. Generate N EdDSA secret/public key pairs (N > 2).
2. Put all of the public keys in the SCSP header.
3. Use only one secret key for signing, from an airgapped machine, whenever
a website update is required. The rest should remain on encrypted thumb
drives kept in hidden caches.

Client Implementation:

Upon accessing a website with an SCSP header for the first time, render the
key fingerprints and ask the user if they trust this series of hexits. If
someone later attempts to add/replace any of the public keys, immediately
disable JavaScript and show the user a panic warning. This is basically the
SSH model of trust (trust on first use), but in the event of a signing key
compromise, the other keys can still be used and the untrusted public key
can be removed without causing a ruckus for the end user.
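That trust rule can be sketched as a small check, assuming the trust store
is simply the set of fingerprints the user accepted on their first visit:
removing a (possibly compromised) key passes silently, while any addition
or replacement trips the panic path.

```python
def check_pinned_keys(trusted, presented):
    """SSH-style pinning decision for SCSP public keys.

    trusted:   set of fingerprints the user previously accepted
    presented: set of fingerprints in today's SCSP header
    Returns "first-visit", "ok", or "panic".
    """
    if not trusted:
        return "first-visit"   # show the hexits and ask the user
    if presented and presented <= trusted:
        return "ok"            # same keys, or a compromised key was dropped
    return "panic"             # a key was added or replaced: disable JS
```

Whether the browser should eventually re-prompt instead of hard-failing is
a UX question; the sketch only captures the add/replace-means-panic rule.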

Users' trust decisions should be stored in a separate file from cert8.db,
and users should be able to tell their browser where to store it. In
"private browsing" modes, this file should be cloned into memory and never
written back to disk without explicit user action (e.g. for Tor Browser
Bundle users).

This is obviously a very rough draft, but I would love to get feedback on
it and, if everyone approves, move forward with developing it into
something greater. (Browser extension? Internet Standard? Not my place to
say :)

Scott Arciszewski

Received on Sunday, 15 February 2015 15:28:37 UTC