Re: More on XSS mitigation (was Re: XSS mitigation in browsers)

Any thoughts on the response below (fished out of that mess of a thread)?

In general, to sum up my comments... and yes, all of this adds some
complexity, which is not what you want, but I am wondering whether
there are benefits to doing four things:

1) Allowing policies to be defined in HTTP headers, and then parsing
the first relevant <meta> if no HTTP header policy is found. There is
a compelling argument to be made in favor of HTTP headers (a reduced
likelihood of mishaps, less clutter with complex policies); and in
favor of <meta> (easier deployment in some use cases). I don't think
it's productive to build two competing approaches around this
distinction alone.
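
Something along these lines - the header name and directive syntax
below are made up for the sake of the example, not taken from either
proposal. The same policy could be delivered in a response header:

  X-Security-Policy: script-src https://apps.example.com

...and, if no such header is present, picked up from the first
relevant <meta> in the document:

  <meta http-equiv="X-Security-Policy"
        content="script-src https://apps.example.com">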

2) Allowing policy violations to be reported to server-side callbacks
(which improves the detection of gross policy specification errors -
in your proposals, violations would go unreported if the DOM handler
itself fails to load); and DOM handling of policy violations (which
gives more flexibility), as controlled by a policy flag. Likewise, I
do not see a reason to make this a distinguishing factor for any of
the approaches.
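
For example - directive names again invented purely for illustration -
a single policy could enable one or both channels:

  X-Security-Policy: script-src https://apps.example.com;
                     report-uri /policy-violation-log;
                     report-to-dom on

...where report-uri designates a server-side callback that receives a
POST for every violation, and report-to-dom lets the page observe
violations itself (say, via a DOM event) when more flexibility is
needed.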

3) Allowing inline scripts guarded by policy-specified nonce tokens
(<meta> says "inline-script-token=$random", inline scripts have
<script token="$previously_specified_random">...</script>). This
eliminates one of the most significant issues with deploying CSP or
this proposal on sites that are extremely concerned about the overhead
of extra HTTP requests; for example, much of *.google.com is subject
to such concerns.
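
A rough sketch of what I have in mind (token value, header name,
attribute name, and function names are all illustrative):

  <meta http-equiv="X-Security-Policy"
        content="inline-script-token=d7c2a9f4e1">
  ...
  <script token="d7c2a9f4e1">
    // legitimate inline code; the token matches the policy, so it runs
    initUi();
  </script>
  <script>
    // injected markup cannot guess the per-response token, so this
    // block is refused
    stealCookies();
  </script>

The attacker has no way of predicting the per-response token, so
reflected markup cannot satisfy the check - and the page avoids an
extra round-trip for an external script file.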

4) Having a policy flag to choose between origin-based specifications
(which are more convenient, but problematic for two reasons: the JS
API problem outlined earlier, and the out-of-order / out-of-context
loads described in the mail below) and explicit URL whitelisting (more
verbose and harder to maintain, but significantly safer from that
perspective).
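
Roughly, a site could then opt into either form (directive names
illustrative):

  Origin-based - convenient, but any script on the host can be pulled
  in, in any order or context:

    script-src https://apps.example.com

  Explicit URL whitelist - verbose, but pins the exact resources:

    script-src https://apps.example.com/js/base.js
               https://apps.example.com/js/docs-main.js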

Regards,
/mz

---------- Forwarded message ----------
From: Michal Zalewski <lcamtuf@coredump.cx>
Date: Thu, Jan 20, 2011 at 3:23 PM
Subject: Re: XSS mitigation in browsers
To: Sid Stamm <sid@mozilla.com>
Cc: Brandon Sterne <bsterne@mozilla.com>, Adam Barth
<w3c@adambarth.com>, public-web-security@w3.org, Lucas Adamski
<ladamski@mozilla.com>


> The other case you present is indeed more problematic.

Another interesting problem: what happens if I load legitimate scripts
from a permitted origin, but in the wrong context or the wrong order?
For example, what if a Google Docs page is made to load a script that
belongs to Google Mail - both served from whitelisted origins? Would
that render the internal state of the application inconsistent?
Probably... in an exploitable manner? Not sure.

In the days of "dumb" JS applications, this would not be something to
think about - but there is an increasing trend to move pretty much all
the business logic into increasingly complex client-side JS, with
servers acting as dumb, ACL-enforcing storage (which can sometimes be
substituted by HTML5 storage in offline mode).

(This is also the beef I have with selective XSS filters: I don't
think we can, with any degree of confidence, say that selectively
nuking legit scripts on a page will not introduce XSS vulnerabilities,
destroy user data, etc.)

Origin-based script sourcing is better than nothing, but I suspect
its value is more limited than may be immediately apparent :-(
Whitelisting specific URLs (messier, but not infeasible), or requiring
inline and remote scripts to carry nonces or signatures (which also
addresses the HTTP latency concerns), may be better.

/mz
