
Re: XSS mitigation in browsers

From: gaz Heyes <gazheyes@gmail.com>
Date: Thu, 20 Jan 2011 23:38:03 +0000
Message-ID: <AANLkTi=VWt+bGFzZT=jcpRZOdHs9R=bDc-CP1arDGxG3@mail.gmail.com>
To: Michal Zalewski <lcamtuf@coredump.cx>
Cc: Sid Stamm <sid@mozilla.com>, Brandon Sterne <bsterne@mozilla.com>, Adam Barth <w3c@adambarth.com>, public-web-security@w3.org, Lucas Adamski <ladamski@mozilla.com>
On 20 January 2011 23:23, Michal Zalewski <lcamtuf@coredump.cx> wrote:

> (This is also the beef I have with selective XSS filters: I don't
> think we can, with any degree of confidence, say that selectively
> nuking legit scripts on a page will not introduce XSS vulnerabilities,
> destroy user data, etc)
>

This would be a good argument for a native sandbox in every browser: if we
can detect an attack (certainly possible), then we can react to it by placing
the browser in a sandbox rather than blocking scripts or replacing output. The
browser is in the best position to control the content, since it's the one
rendering it. If we know a pattern matches and we know where it occurs, then
the output can be sandboxed where the injection occurs. We need a way to
sandbox HTML and JavaScript; web workers would be a nice way to execute
JavaScript safely if they didn't send cookies with requests to import scripts
and if they allowed deletion of native properties.
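
As a rough sketch of the worker idea (assuming a browser that can create
workers from Blob URLs; the message protocol and names here are only
illustrative, not a proposed API, and this does nothing about the cookie or
native-property problems mentioned above):

// Run untrusted script inside a worker built from a Blob URL, so it has
// no access to the DOM or to document.cookie. Note that network requests
// made from the worker (XMLHttpRequest, importScripts) still carry the
// user's cookies, which is the objection raised above.
var workerSource =
  'onmessage = function (e) {\n' +
  '  var result;\n' +
  '  try {\n' +
  '    result = String(eval(e.data)); // evaluate the untrusted code\n' +
  '  } catch (err) {\n' +
  '    result = "error: " + err.message;\n' +
  '  }\n' +
  '  postMessage(result);\n' +
  '};';

var blob = new Blob([workerSource], { type: 'application/javascript' });
var worker = new Worker(URL.createObjectURL(blob));

worker.onmessage = function (e) {
  // The page only ever sees a string result, never live objects.
  console.log('worker result: ' + e.data);
};

// The injected code runs without DOM access, but it can still make
// credentialed requests and cannot have its native properties removed.
worker.postMessage('1 + 1');
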
Received on Thursday, 20 January 2011 23:38:35 GMT
