
Re: XSS mitigation in browsers

From: Michal Zalewski <lcamtuf@coredump.cx>
Date: Thu, 20 Jan 2011 15:23:54 -0800
Message-ID: <AANLkTimk0xEZ6_WVkTXUt-buw_OyCd+CA6gcjwkpYRQB@mail.gmail.com>
To: Sid Stamm <sid@mozilla.com>
Cc: Brandon Sterne <bsterne@mozilla.com>, Adam Barth <w3c@adambarth.com>, public-web-security@w3.org, Lucas Adamski <ladamski@mozilla.com>
> The other case you present is indeed more problematic.

Another interesting problem: what happens if I load legitimate scripts
meant to be hosted in a particular origin, but load them in the wrong
context or wrong order? For example, what if I take Google Docs and
load a part of Google Mail into it? Would that render the internal
state of the application inconsistent? Probably... in an exploitable
manner? Not sure.
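To make that concrete: an attacker page could pull both scripts from
the whitelisted origin - so a purely origin-based policy permits them -
just not in the combination the application expects (URLs and file
names below are purely hypothetical):

```html
<!-- Attacker-controlled page. Both scripts really do come from the
     whitelisted origin, so "script-src https://example-app.test"
     allows them; but the app never intended its mail state loader
     to run before the docs editor core, or on a third-party page. -->
<script src="https://example-app.test/mail/state-loader.js"></script>
<script src="https://example-app.test/docs/editor-core.js"></script>
```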

In the days of "dumb" JS applications, this would not be something to
think about - but there is an increasing trend to move pretty much all
of the business logic into increasingly complex client-side JS, with
servers acting as dumb, ACL-enforcing storage (which can sometimes be
replaced by HTML5 storage in offline mode).

(This is also the beef I have with selective XSS filters: I don't
think we can say, with any degree of confidence, that selectively
nuking legit scripts on a page will not introduce XSS vulnerabilities,
destroy user data, etc.)

Origin-based script sourcing is better than nothing, but I suspect
its value is more limited than may be immediately apparent :-(
Whitelisting specific URLs (messier, but not infeasible), or requiring
inline and remote scripts to carry nonces or signatures (which also
addresses HTTP latency concerns), may be better.
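Roughly, the nonce variant could look something like this (everything
here - the header syntax, the attribute, the helper names - is purely
illustrative, not a concrete proposal): the server mints a fresh,
unguessable token per response, declares it in a policy header, and
tags every script it actually intends to run with it. Injected markup
can't predict the token, and scripts can't be meaningfully replayed
out of context without a page that also presents a valid nonce.

```python
# Hypothetical sketch of a per-response script nonce; names and
# header syntax are illustrative only.
import os
import base64


def make_nonce() -> str:
    # 128 bits of per-response randomness; markup injected by an
    # attacker cannot predict it, so injected scripts won't carry it.
    return base64.b64encode(os.urandom(16)).decode("ascii")


def render_page(inline_js):
    """Return (policy_header, html_fragment): the policy names the
    nonce, and only script blocks carrying that nonce would run."""
    nonce = make_nonce()
    policy_header = f"script-src 'nonce-{nonce}'"
    html_fragment = f'<script nonce="{nonce}">{inline_js}</script>'
    return policy_header, html_fragment
```

Since the nonce changes on every response, a cached or replayed page
cannot be combined with scripts minted for a different response.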

/mz
Received on Thursday, 20 January 2011 23:24:46 GMT