- From: Michal Zalewski <lcamtuf@coredump.cx>
- Date: Mon, 31 Jan 2011 16:53:01 -0800
- To: Aryeh Gregor <Simetrical+w3c@gmail.com>
- Cc: "public-web-security@w3.org" <public-web-security@w3.org>
> No, authors can't get simple HTML escaping right, but that applies to
> any case where they have to identify all the specific places where
> untrusted content is present.

Well, that's sort of speculative; we have given them horrible and
unintuitive tools to do the job; we are generally not asking them "is
this untrusted", but "will this contain this type of bad characters".
It's possible to conclude that this means they can't get it right at
all, but other explanations are also on the table.

That said, I don't think there's a whole lot of a point in arguing,
especially without real data to back any of these gut feelings up ;-)

My only concern is, as noted: of the three XSS prevention methods
outlined a few posts ago, I can accept that (1) may not offer benefits
over (2); but then I don't buy that (3) would, making sandboxed frames
a no-op from this perspective.

> Why do you think a few dozen iframes with srcdoc will be noticeably
> slow? Have you benchmarked? A few dozen is enough for forum posts or
> blog comments, for instance.

There are typically several hundred individual attacker-controlled
snippets on such a page, punctuated with inline event handlers and so
forth.

>> 3) The likelihood of messing up base: or srcdoc encoding somewhere is
>> probably about the same as that of forgetting to escape text in the
>> first place.
>
> Sure. But simple escaping doesn't let markup through, like bold or
> links. The likelihood of messing up srcdoc encoding is vastly, vastly
> lower than the likelihood of messing up server-side HTML sanitization.

Yes; I never stated that sandboxed frames are useless for this. I think
it's their strong suit. But it's a very small blip on the XSS radar.

/mz
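
For concreteness, a minimal sketch of the srcdoc wrapping discussed
above, assuming a double-quoted srcdoc attribute; the Python helper
names are illustrative only, not something from the thread. Only
attribute-level escaping is needed (ampersands and the surrounding
quote character), so markup such as bold or links passes through,
while the bare sandbox attribute keeps scripts and inline event
handlers in the untrusted fragment from running.

    def srcdoc_escape(fragment: str) -> str:
        # Escape only what would break out of a double-quoted
        # attribute value: ampersands first, then double quotes.
        return fragment.replace("&", "&amp;").replace('"', "&quot;")

    def wrap_untrusted(fragment: str) -> str:
        # 'sandbox' with no allow-* tokens: scripting, forms, plugins
        # and top-level navigation are all disabled inside the frame.
        return ('<iframe sandbox srcdoc="%s"></iframe>'
                % srcdoc_escape(fragment))

    print(wrap_untrusted('<b>hi</b> <img src=x onerror=alert(1)>'))

The failure mode of forgetting to apply srcdoc_escape in one spot is
the same shape as forgetting plain HTML escaping, which is the point
being debated above.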
Received on Tuesday, 1 February 2011 00:53:54 UTC