- From: Robin Berjon <robin@w3.org>
- Date: Tue, 03 Jun 2014 16:58:28 +0200
- To: Boris Zbarsky <bzbarsky@MIT.EDU>, public-webapps@w3.org
On 02/06/2014 15:08, Boris Zbarsky wrote:
> On 6/2/14, 9:02 AM, James M Snell wrote:
>> I suppose that if you needed the ability to sandbox them further,
>> just wrap them inside a sandboxed iframe.
>
> The worry here is sites that currently have HTML filters for
> user-provided content and don't know about <link> being able to run
> scripts. Clearly, once a site knows about this it can adopt various
> mitigation strategies. The question is whether we're creating XSS
> vulnerabilities in sites that are currently not vulnerable by adding
> this functionality.
>
> P.S. A correctly written whitelist filter will filter these things out.

Are we confident this is standard practice now? I haven't bumped into a
blacklist filter in a *long* while. I suspect that any that might still
exist are hand-rolled and not part of any platform, and the odds are
pretty strong that they're already unsafe, if not wide open. So I would
say there's a risk, but not a huge one.

That said, I still prefer Simon's approach.

PS: I've been wondering whether adding an HTML sanitiser to the
platform might make sense.

-- 
Robin Berjon - http://berjon.com/ - @robinberjon
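To make the "correctly written whitelist filter" point concrete, here is a minimal sketch of an allowlist filter, assuming a browser environment with `DOMParser`. The tag and attribute lists are illustrative placeholders, not a vetted safe set. The key property is that anything not explicitly allowed is removed, so a newly script-capable element such as `<link>` is dropped by default, whereas a blacklist filter written before the feature existed would let it through:

```typescript
// Illustrative allowlist; a real filter would use a carefully vetted set.
const ALLOWED_TAGS = new Set(["P", "B", "I", "EM", "STRONG", "A", "UL", "OL", "LI"]);
const ALLOWED_ATTRS = new Set(["href", "title"]);

function sanitize(untrustedHtml: string): string {
  // Parse into an inert document; scripts in it do not execute.
  const doc = new DOMParser().parseFromString(untrustedHtml, "text/html");

  const walk = (node: Element): void => {
    // Copy the live collection before mutating it.
    for (const child of Array.from(node.children)) {
      if (!ALLOWED_TAGS.has(child.tagName)) {
        // Not on the allowlist: <link>, <script>, etc. never survive,
        // even if the filter's author has never heard of the element.
        child.remove();
        continue;
      }
      for (const attr of Array.from(child.attributes)) {
        // A real filter must also validate URL schemes on allowed
        // attributes (e.g. reject javascript: in href); omitted here.
        if (!ALLOWED_ATTRS.has(attr.name.toLowerCase())) {
          child.removeAttribute(attr.name);
        }
      }
      walk(child);
    }
  };

  walk(doc.body);
  return doc.body.innerHTML;
}
```

James's alternative is coarser but simpler: render the untrusted markup inside `<iframe sandbox>` (for instance via its `srcdoc` attribute) and let the browser's origin isolation contain whatever runs, rather than trying to strip it out.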
Received on Tuesday, 3 June 2014 15:03:09 UTC