
Re: Comments on the Content Security Policy specification

From: Bil Corry <bil@corry.biz>
Date: Thu, 16 Jul 2009 14:20:19 -0500
Message-ID: <4A5F7D73.3080709@corry.biz>
To: Ian Hickson <ian@hixie.ch>
CC: Brandon Sterne <bsterne@mozilla.com>, Sid Stamm <sid@mozilla.com>, dev-security@lists.mozilla.org, www-archive@w3.org, jonas@sicking.cc
Ian Hickson wrote on 7/16/2009 5:51 AM: 
> I think that this complexity, combined with the tendency for authors to 
> rely on features they think are solving their problems, would actually 
> lead to authors writing policy files in what would externally appear to be 
> a random fashion, changing them until their sites worked, and would then 
> assume their site is safe. This would then likely make them _less_ 
> paranoid about XSS problems, which would further increase the possibility 
> of them being attacked, with a good chance of the policy not actually 
> being effective.

I think your point that CSP may be too complex and/or too much work for some developers is spot on.  Even getting developers to use something as simple as the Secure flag for cookies on HTTPS sites is still a challenge.  And if we can't get developers to use the Secure flag, the chances of getting sites properly configured with CSP are slim at best.  More to my point, getting developers to adopt *any* security feature is a struggle, so any solution to a security issue that doesn't involve protection by default is going to lack coverage, whether through lack of deployment or misconfigured deployment.  And since protection by default would (in this case) mean broken web sites, we're left with an opt-in model that achieves only partial coverage.
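To make the Secure flag point concrete, here's a minimal sketch of the one-line fix involved (the cookie name and value are placeholders, and real deployments would add attributes like Expires):

```python
def set_cookie_header(name, value, secure=True):
    """Build a Set-Cookie header value; Secure restricts the cookie to HTTPS."""
    header = "%s=%s; Path=/; HttpOnly" % (name, value)
    if secure:
        # Without this attribute, the browser will also send the cookie
        # over plain HTTP, where it can be sniffed off the wire.
        header += "; Secure"
    return header

print(set_cookie_header("SESSIONID", "abc123"))
# SESSIONID=abc123; Path=/; HttpOnly; Secure
```

That it's this easy, and still widely skipped, is what makes me doubt a mechanism as involved as CSP will fare better on deployment rates.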

At first glance, it may seem like a waste of time to implement CSP if the best we can achieve is only partial coverage, but instead of looking at it from the number of sites covered, look at it from the number of users covered.  If a large site such as Twitter were to implement it, that's millions of users protected who otherwise wouldn't be.



> I think CSP should be more consistent about what happens with multiple 
> policies. Right now, two headers will mean the second is ignored, and two 
> <meta>s will mean the second is ignored; but a header and a <meta> will 
> cause the intersection to be used. Similarly, a header with both a policy 
> and a URL will cause the most restrictive mode to be used (and both 
> policies to be ignored), but a misplaced <meta> will cause no CSP to be 
> applied.

I agree.  There's been some discussion about removing <meta> support entirely and/or allowing multiple headers combined via an intersection algorithm; whichever of those ideas is adopted, it makes sense to ensure consistency across the spec.
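For illustration, the intersection idea boils down to something like the sketch below.  This is a deliberate simplification: it treats each policy as a map of directive to a set of allowed source hosts, whereas the draft's source-expression matching is richer, and the directive and host names are made up.

```python
def intersect_policies(a, b):
    """Combine two policies so that only sources both allow survive.

    A directive present in only one policy keeps that policy's sources,
    since the other policy places no restriction on that directive.
    """
    merged = {}
    for directive in set(a) | set(b):
        if directive in a and directive in b:
            merged[directive] = a[directive] & b[directive]  # set intersection
        else:
            merged[directive] = a.get(directive) or b[directive]
    return merged

header_policy = {"img-src": {"self", "img.example.com"}}
meta_policy   = {"img-src": {"self"}, "script-src": {"self"}}
print(intersect_policies(header_policy, meta_policy))
```

The appeal of an approach like this is that a second policy can only ever tighten the first, never loosen it, which is the consistent behavior the spec currently lacks.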



> I don't think UAs should advertise support for this feature in their HTTP 
> requests. Doing this for each feature doesn't scale. Also, browsers are 
> notoriously bad at claiming support accurately; since bugs will be present 
> whatever happens, servers are likely to need to do regular browser 
> sniffing anyway, even if support _is_ advertised. In the long term, all 
> browsers would support this, and during the transition period, browser 
> sniffing would be fine. (If we do add the advertisement, we can never 
> remove it, even if all browsers support it -- just like we can't remove 
> the "Mozilla/4.0" part of every browser's UA string now.)

This is under discussion as well; if you have an interest, here's the most recent thread:

http://groups.google.com/group/mozilla.dev.security/browse_thread/thread/571f1495e6ccf822#anchor_1880c3647a49d3e7
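For what it's worth, here's a sketch of the two detection strategies you're contrasting, from the server's point of view.  The request header name "X-CSP-Version" is entirely hypothetical (no such advertisement header exists), and the User-Agent substring is illustrative only:

```python
def supports_csp_via_advertisement(request_headers):
    """If browsers advertised support, detection would be trivial --
    but the advertisement would, as you say, live forever."""
    return "X-CSP-Version" in request_headers  # hypothetical header name

def supports_csp_via_sniffing(user_agent):
    """Without advertisement, servers sniff the User-Agent string,
    keyed to whichever browser versions are known to ship the feature."""
    return "Firefox/" in user_agent  # illustrative check only

print(supports_csp_via_advertisement({"X-CSP-Version": "1.0"}))
print(supports_csp_via_sniffing("Mozilla/5.0 (X11; Linux) Firefox/3.6"))
```

Your point that servers would end up maintaining the sniffing logic anyway, advertisement or not, is a fair argument against paying the permanent per-request cost.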



- Bil
Received on Thursday, 16 July 2009 19:21:35 GMT
