
Re: XSS mitigation in browsers

From: Michal Zalewski <lcamtuf@coredump.cx>
Date: Wed, 19 Jan 2011 15:12:39 -0800
Message-ID: <AANLkTimt845sZ8S9nxjhWcFQKcbwhsW-sCszN_fbx14A@mail.gmail.com>
To: Adam Barth <w3c@adambarth.com>
Cc: public-web-security@w3.org, Sid Stamm <sid@mozilla.com>, Brandon Sterne <bsterne@mozilla.com>

> 1) Instead of using HTTP headers, the policy is expressed in HTML.  Of
> course, authors will want to place the policy as early as possible in
> their document, so we're using a meta element, which can be placed in
> the head of the document.

My general concern is that many complex web applications probably have
at least one location that, if a request to it is made, will return a
payload that contains an attacker-controlled string, but parses as
valid JavaScript. Heck, many 404 pages will probably parse as E4X.
Unless you also enforce strict Content-Type matching on all
policy-enforced scripts, the mechanism can likely be subverted in most
real-world uses.

In addition, consider that many applications use this common pattern:

Request: GET /some_public_js_api?callback=foo
Response: foo('some_public_data')

These will have valid Content-Type values, but can likewise be
subverted by specifying an evil callback - just inject:

<script src="http://our_own_domain/some_public_js_api?callback=do_bad_stuff"></script>
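
The pattern above amounts to the server reflecting the callback name
verbatim, so a policy that trusts the serving domain trusts every
function name an attacker can put in that parameter. A minimal sketch
(function and data names are illustrative):

```javascript
// Sketch of the JSONP pattern described above: the endpoint reflects
// whatever callback name it is handed.
function jsonpResponse(callback, data) {
  return callback + "(" + JSON.stringify(data) + ")";
}

// Legitimate use:
jsonpResponse("foo", "some_public_data");
// => foo("some_public_data")

// Injected use - the attacker picks which function runs in the page:
jsonpResponse("do_bad_stuff", "some_public_data");
// => do_bad_stuff("some_public_data")
```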

There is also a performance penalty in requiring all scripts to be
loaded from an external source, which will hamper adoption to some
extent, since extra HTTP requests can be expensive. Both of these
problems could be fixed by a policy that instead allows any <script>
tag bearing a specific, policy-defined random token as a parameter:

<meta name="script-nonce" content="1234">
<script nonce=1234>...</script>

...but that takes us back to the discussions we had before - it looks
like there is no faith that this can be used safely by the general
public, even if it is beneficial to clued-in developers. I disagree,
but we won't settle this here.

These concerns aside:

1) Implementers should probably be strongly advised that the first
allowed-scripts value must take precedence; even if this is otherwise
mandated by HTML5, the mechanism may be backported to
non-fully-compliant renderers, so it would be good to emphasize this.

2) IFRAME / window resources loaded from data: URLs are a concern, as
you note - inheritance of allowed-scripts along with SOP context
inheritance, with no ability for the document to override it, may be a
solution. For plugins, perhaps a similar allowed-plugins policy would
work.

3) Due to the prevalence of open redirectors, the policy should
preferably apply not only to the initial URL, but also to every 30x
hop encountered.


> 3) Instead of reporting violations to the server via HTTP, this
> proposal simply generates a DOM event in the document.  The author of
> the page can listen for the event and wire it up to whatever analytics
> the author uses for other kinds of events (e.g., mouse clicks).

Markup injected into the page before that <script> is loaded could be
used to suppress that callback in some cases, e.g.

Hello, $user_name
...
<script src="http://www.example.com/script_with_a_security_callback.js"></script>

...with $user_name = "<script>// Injected, will not execute, but will
consume the next tag".

Loading of specific scripts on a page can also be inhibited due to the
(troubling) behavior of contemporary XSS filters.

All in all, I'm guessing the DOM callback would offer relatively
little security benefit unless care is taken to register it very early
on.
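
If the callback route is kept anyway, registering the listener from an
inline script at the very top of <head>, before any injectable content
renders, at least narrows the window. A sketch (the event name
"securitypolicyviolation" and the reporting hook are assumptions, not
fixed by the proposal):

```javascript
// Sketch: install the violation listener as early as possible. The
// document/report parameters exist so the wiring is testable; in a
// real page you would pass the global document and a beacon sender.
function installViolationListener(doc, report) {
  doc.addEventListener("securitypolicyviolation", function (e) {
    report(JSON.stringify({
      blockedURI: e.blockedURI,
      violatedDirective: e.violatedDirective,
    }));
  });
}

// In the page, from an inline <script> at the top of <head>:
// installViolationListener(document,
//   body => navigator.sendBeacon("/csp-report", body));
```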

/mz
Received on Wednesday, 19 January 2011 23:13:32 GMT
