
Re: More on XSS mitigation (was Re: XSS mitigation in browsers)

From: Adam Barth <w3c@adambarth.com>
Date: Fri, 21 Jan 2011 15:06:13 -0800
Message-ID: <AANLkTinM94v-8+pT8xOCXQhgrVn5NrJXH6euUPYzCz0b@mail.gmail.com>
To: Michal Zalewski <lcamtuf@coredump.cx>
Cc: public-web-security@w3.org
On Fri, Jan 21, 2011 at 2:44 PM, Michal Zalewski <lcamtuf@coredump.cx> wrote:
> Any thoughts on the response below (fished out of that mess of a thread)?

Your general approach seems to be to provide all the various options
whenever more than one thing might make sense.  At a high level, I
don't think that's a good approach.  We should weigh the various
factors and pick the option that makes the best trade-off, as far as
we can determine.  For example, we shouldn't report violations both
via events and via HTTP pings.  Supporting both options is worse than
just picking one and missing out on some obscure benefit of the other.

I think the nonce approach to whitelisting script loads is certainly
worth considering instead of the origin-based whitelisting approach.
However, that issue comes somewhat later in the discussion.  First, we
want to get everyone on the same page w.r.t. scope.


> In general, to sum my comments up... and yes, all of this adds some
> complexity, which is not what you want, but I am wondering if there
> are benefits to doing four things:
> 1) Allowing policies to be defined in HTTP headers; and then parsing
> the first relevant <meta> if no HTTP header policy is found. There is a
> compelling argument to be made in favor of HTTP headers (a reduced
> likelihood of mishaps, less clutter with complex policies); and in
> favor of <meta> (easier deployment in some use cases). I don't think
> it's productive to build two competing approaches around this
> distinction alone.
> 2) Allowing policy violations to be reported to server-side callbacks
> (which offers improved detection rates for gross policy specification
> errors - in your proposals, these would not get reported if the
> handler itself fails to load); and DOM handling of policy violations
> (which gives more flexibility), as controlled by a policy flag.
> Likewise, I do not see a reason to make this a distinguishing factor
> for any of the approaches.
> 3) Allowing inline scripts guarded by policy-specified nonce tokens
> (<meta> says "inline-script-token=$random", inline scripts have
> <script token="$previously_specified_random">...</script>). This
> eliminates one of the most significant issues with deploying CSP or
> this proposal on sites that are extremely concerned about the overhead
> of extra HTTP requests; for example, much of *.google.com is subject
> to such concerns.
> 4) Having a policy flag to choose between origin-based specifications
> (which are more convenient, but problematic for two reasons: the JS
> API problem outlined earlier, and the out-of-order / out-of-context
> loads described in the mail below) and explicit URL whitelisting (more
> verbose and harder to maintain, but significantly safer from that
> perspective).
> Regards,
> /mz
> ---------- Forwarded message ----------
> From: Michal Zalewski <lcamtuf@coredump.cx>
> Date: Thu, Jan 20, 2011 at 3:23 PM
> Subject: Re: XSS mitigation in browsers
> To: Sid Stamm <sid@mozilla.com>
> Cc: Brandon Sterne <bsterne@mozilla.com>, Adam Barth
> <w3c@adambarth.com>, public-web-security@w3.org, Lucas Adamski
> <ladamski@mozilla.com>
>> The other case you present is indeed more problematic.
> Another interesting problem: what happens if I load legitimate scripts
> meant to be hosted in a particular origin, but I load them in the
> wrong context or wrong order? For example, what if I take Google Docs
> and load a part of Google Mail? Would that render the internal state
> of the application inconsistent? Probably... in an exploitable manner?
> Not sure.
> In the days of "dumb" JS applications, this would not be something to
> think about - but there's an increasing trend to move pretty much all
> the business logic to increasingly complex client-side JS, with
> servers acting as dumb, ACL-enforcing storage (which can sometimes
> be substituted by HTML5 storage in offline mode).
> (This is also the beef I have with selective XSS filters: I don't
> think we can, with any degree of confidence, say that selectively
> nuking legit scripts on a page will not introduce XSS vulnerabilities,
> destroy user data, etc.)
> Origin-based script sourcing is better than nothing, but I suspect its
> value is more limited than may be immediately apparent :-(
> Whitelisting specific URLs (messier, but not infeasible), or
> requiring inline and remote scripts to carry nonces or signatures
> (which also addresses the HTTP latency concerns), may be better.
> /mz
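
The trade-off in point 4 and in the paragraph above can be illustrated
with two hypothetical policy fragments (directive names and URLs are
invented for illustration; the actual syntax was still under discussion):

```html
<!-- Origin-based whitelisting: convenient, but permits ANY script
     from the origin to load, in any order and any context -- including
     a Google Docs script loaded into Google Mail:

       script-src https://ssl.gstatic.com

     Explicit URL whitelisting: verbose and harder to maintain, but
     pins exactly which scripts may load, closing the out-of-order /
     out-of-context loophole:

       script-src https://ssl.gstatic.com/gmail/compose.js
       script-src https://ssl.gstatic.com/gmail/inbox.js
-->
```

Under the origin-based form, an injection that merely references a
legitimate-but-wrong script from the whitelisted origin still executes;
under the URL-based form, it does not.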
Received on Friday, 21 January 2011 23:07:19 UTC
