Re: XSS mitigation in browsers

> * It should be possible to specify policy without messing around in
> headers. With the 'extra policy can only make things more restrictive'
> setup, I don't see why this isn't a good idea.

Yes, I think the objections to <meta> policies are a stretch in
some circumstances. But how do you imagine turning a policy like CSP
or Adam's proposal into an "additively restrictive" one? The
principle behind both approaches is that you start with default deny
and whitelist the permitted origins.

The way to do it safely is to allow the whitelist to be specified
only once, and ignore subsequent attempts; I am not sure how that
fits your model.

> * It should be possible to handle violations programmatically. As I
> argued before, I think this is a cleaner/simpler/better/flexible
> design than the current CSP design.

Violations are accessible programmatically in all the proposed
approaches; the difference is that in CSP they are reported on the
server side, and in Adam's proposal on the client side.

The counter-argument is that by the time you have a policy violation,
your client-side JS may already be busted. If the initial violation
is the policy blocking the monitoring JS itself, you never get a
notification at all. So CSP is not bad in that regard.

But I really think that in both cases, it's a false dichotomy. There
is no reason CSP could not accept both policy delivery mechanisms
(HTTP header and <meta>); if the HTTP header takes precedence over
<meta>, there is no security trade-off, and there is a usability
gain. There is likewise no reason CSP could not report either to
local JS or to a server-side callback, depending on a policy
parameter.
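The hybrid behavior argued for here can be sketched as follows. This is a toy model, not a real CSP implementation; the `report-to` / `report-uri` parameter names are assumptions for illustration.

```python
# Hypothetical sketch of the hybrid design: a header-delivered policy,
# if present, beats a <meta>-delivered one, and a policy parameter
# selects the violation-report channel. Names are illustrative only.

def effective_policy(header_policy, meta_policy):
    """Header takes precedence over <meta>, so an attacker who can
    only inject markup cannot loosen a policy the server sent."""
    return header_policy if header_policy is not None else meta_policy

def dispatch_report(violation, policy):
    """Route a violation to a server callback or to local JS,
    depending on the (assumed) 'report-to' policy parameter."""
    if policy.get("report-to") == "server":
        return ("POST", policy["report-uri"], violation)
    return ("js-event", violation)

policy = effective_policy(
    {"allow": ["'self'"], "report-to": "server",
     "report-uri": "/csp-report"},
    # Injected <meta> policy: ignored because a header is present.
    {"allow": ["'self'", "https://evil.example"]},
)
print(dispatch_report({"blocked": "https://evil.example/x.js"}, policy))
# -> ('POST', '/csp-report', {'blocked': 'https://evil.example/x.js'})
```

With no header policy, `effective_policy` falls back to the <meta> one, which is the usability win: authors without header access still get protection, and attackers still cannot override a server-set policy.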

If these are the two most important factors distinguishing the
proposals, then I think we're sort of doing it wrong =) The added
complexity of supporting both modes is not that significant, and the
arguments are bound to come down to belief systems rather than
rational facts.

/mz

Received on Friday, 21 January 2011 22:14:33 UTC