
Re: Comments on the Content Security Policy specification

From: Brandon Sterne <bsterne@mozilla.com>
Date: Fri, 17 Jul 2009 14:58:40 -0700
Message-ID: <4A60F410.90707@mozilla.com>
To: Ian Hickson <ian@hixie.ch>
CC: Daniel Veditz <dveditz@mozilla.com>, Sid Stamm <sid@mozilla.com>, www-archive@w3.org, jonas@sicking.cc, dev-security@lists.mozilla.org

On 7/16/09 8:17 PM, Ian Hickson wrote:
> On Thu, 16 Jul 2009, Daniel Veditz wrote:
>> Ian Hickson wrote:
>>> * The more complicated something is, the more mistakes people will 
>>> make.
>> We encourage people to use the simplest policy possible. The additional 
>> options are there for the edge cases.
> 
> It doesn't matter what we encourage. Most authors are going to be using 
> this through copy-and-paste from tutorials that were written by people who 
> made up anything they didn't work out from trial and error themselves.

Dan's point is absolutely true.  The majority of sites will be able to
benefit from simple, minimal policies.  If a site hosts all its own
content then a policy of "X-Content-Security-Policy: allow self" will
suffice, providing full XSS protection out of the box.  I tend
to think this will be the common example that gets cut-and-pasted the
majority of the time.  Only more sophisticated sites will need to delve
into the other features of CSP.
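As a sketch of what deploying that minimal policy might look like, here is a hypothetical WSGI app; only the header name and value come from the draft spec, everything else is illustrative:

```python
# Minimal sketch: every response carries the simplest CSP policy,
# whitelisting only the site's own origin for all content types.
# The app itself is hypothetical; the header is from the draft spec.
def application(environ, start_response):
    headers = [
        ("Content-Type", "text/html"),
        ("X-Content-Security-Policy", "allow self"),
    ]
    start_response("200 OK", headers)
    return [b"<html><body>All resources load from this origin.</body></html>"]
```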

Content Security Policy has admittedly grown more complex since its
earliest design, but only out of necessity.  As we talked through the
model, we realized that a certain amount of complexity is in fact
necessary to support various use cases which might not be common on the
Web, but still need to be supported.

>>> I believe that if one were to take a typical Web developer, show him 
>>> this:
>>>
>>>    X-Content-Security-Policy: allow self; img-src *;
>>>                               object-src media1.com media2.com;
>>>                               script-src trustedscripts.example.com
>>>
>>> ...and ask him "does this enable or disable data: URLs in <embed>" or 
>>> "would an onclick='' handler work with this policy" or "are framesets 
>>> enabled or disabled by this set of directives", the odds of them 
>>> getting the answers right are about 50:50.
>> Sure, if you confuse them first by asking about "disabling". 
>> _everything_ is disabled; the default policy is "allow none". If you ask 
>> "What does this policy enable?" the answers are easier.
> 
> I was trying to make the questions neutral ("enable or disable"). The 
> authors, though, aren't going to actually ask these questions explicitly, 
> they'll just subconsciously form decisions about what the answers are 
> without really knowing that's what they're doing.

I don't think it makes sense for sites to work backwards from a complex
policy example as the best way to understand CSP.  I imagine sites
starting with the simplest policy, e.g. "allow self", and then
progressively adding policy as required to let the site function
properly.  This will result in more-or-less minimal policies being
developed, which is obviously best from a security perspective.
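One way to picture that workflow is building the header value up from a whitelist, starting at "allow self" and appending directives only as the site turns out to need them. The directive names below follow the draft's examples; the helper itself is hypothetical:

```python
def build_policy(extra_directives=None):
    """Start from the minimal 'allow self' policy and append
    directives only as the site turns out to need them."""
    parts = ["allow self"]
    for name, sources in (extra_directives or {}).items():
        parts.append(f"{name} {' '.join(sources)}")
    return "; ".join(parts)

# Day one: the minimal policy.
print(build_policy())  # allow self
# Later: images from anywhere, scripts from one trusted host.
print(build_policy({"img-src": ["*"],
                    "script-src": ["trustedscripts.example.com"]}))
```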

>> data URLs? nope, not mentioned
>> inline handlers? nope, not mentioned
> 
> How is an author supposed to know that anything not mentioned won't work?
> 
> And is that really true?
> 
>    X-Content-Security-Policy: allow *; img-src self;
> 
> Are cross-origin scripts enabled? They're not mentioned, so the answer 
> must be no, right?
> 
> This isn't intended to be a "gotcha" question. My point is just that CSP 
> is too complicated, too powerful, to be understood by many authors on the 
> Web, and that because this is a security technology, this will directly 
> lead to security bugs on sites (and worse, on sites that think they are 
> safe because they are using a Security Policy).

I don't think your example is proof at all that CSP is too complex.  If
I were writing that policy, my spidey senses would start tingling as
soon as I wrote "allow *".  I would expect everything to be in-bounds at
that point.  This is a whitelist mechanism after all.

>>>    X-Content-Security-Policy: allow https://self:443
>> Using "self" for anything other than a keyword is a botch and I will 
>> continue to argue against it. If you mean "myhost at some other scheme" 
>> then it's not too much to ask you to spell it out. I kind of liked 
>> Gerv's suggestion to syntactically distinguish keywords from host names, 
>> too.
> 
> The examples I gave in the previous e-mail were all directly from the 
> spec itself.

I also agree that this example is awkward.  In fact, the scheme and port
are inherited from the protected document if they are not specified in
the policy, so this policy would only make sense if it were a non-https
page which wanted to load all its resources over https.
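The inheritance rule described above can be sketched in a few lines. This is a rough, hypothetical helper, not the draft's actual matching algorithm:

```python
from urllib.parse import urlsplit

def resolve_source(source, document_url):
    """Fill in the scheme/port a policy source omits by inheriting
    them from the protected document (rough sketch of the rule)."""
    doc = urlsplit(document_url)
    scheme = doc.scheme
    port = doc.port or {"http": 80, "https": 443}[doc.scheme]
    host = source
    if "://" in source:
        scheme, _, host = source.partition("://")
    if ":" in host:
        host, _, p = host.partition(":")
        port = int(p)
    return scheme, host, port

# A bare host inherits everything from an https document:
print(resolve_source("media1.com", "https://example.com/"))
# The awkward example: an http page naming an https source explicitly.
print(resolve_source("https://self:443", "http://example.com/page"))
```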

I don't feel strongly about keeping that feature.  Perhaps we should
allow "self" only as a standalone keyword, not combined with a scheme
or port, as Dan suggests.

>>> ...I don't think a random Web developer would be able to correctly 
>>> guess whether or not inline scripts on the page would work, or whether 
>>> Google Analytics would be disabled or not.
>> Are inline scripts mentioned in that policy? Is Google Analytics? No, so 
>> they are disabled.
> 
> _I_ know the answer. I read the spec. My point is that it isn't intuitive 
> and that authors _will_ guess wrong.

Sorry, but I think this is also weak evidence of too much complexity.
This is a whitelist technology: if a source isn't whitelisted, it
won't be allowed.  That is a fundamental aspect of CSP, and I think it
will be the mental model most developers start from.
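That default-deny rule can be stated in a few lines. This is a hypothetical checker to illustrate the whitelist idea; real CSP source matching is more involved:

```python
def is_allowed(policy, directive, origin):
    """Default-deny: a load is permitted only if its origin appears in
    the relevant directive's whitelist (falling back to 'allow')."""
    sources = policy.get(directive, policy.get("allow", []))
    return "*" in sources or origin in sources

policy = {"allow": ["self"], "img-src": ["*"]}
print(is_allowed(policy, "img-src", "images.example.net"))   # True
print(is_allowed(policy, "script-src", "cdn.example.net"))   # False: not whitelisted
print(is_allowed(policy, "script-src", "self"))              # True via 'allow' fallback
```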

>> I'll admit that the default "no inline" behavior is not at all obvious 
>> and people will just have to learn that
> 
> This strategy has not worked in the past.

They'll learn it as soon as they apply a policy and all their inline
script stops working :-)

>> We are not creating this tool for naive, untrained people.
> 
> Naive, untrained people are who is going to use it.
> 
>> Taking that approach to any security technology is going to get you into 
>> trouble.
> 
> Have you seen the Web? :-)
> 
> I agree entirely. But we don't get to require that people pass a test 
> before they use a technology. They'll use it because they heard of it on 
> w3schools, or because someone on digg linked to it, or because their 
> friend at the local gym heard his sysadmin team is using it.
> 
> We know that people do this. We have to take that into account.

People can't generally hurt themselves if they start with "allow self"
and incrementally relax the policy until their site functions again.

>>> This would then likely make them _less_ paranoid about XSS problems,
>> I hope not, since it does nothing to help their visitors using legacy 
>> browsers that don't support CSP.
> 
> That's a temporary situation. In 20 years, when everyone supports it and 
> nobody cares about today's browsers, CSP will make people less paranoid.

That may be the case, but I don't think it is justifiable to withhold
tools because we worry that people will come to rely upon them for
security.  An analogy: seat belts were introduced in the auto
industry, and yet people still (attempt to) drive safely even though
they know they're buckled up.  Industry reliance upon an anti-XSS
mechanism such as CSP is a problem I would be happy to have.

>> CSP is a back-up insurance policy, defense-in-depth and not the defense 
>> itself.
> 
> Again, you and I know that. The people using it won't.
> 
>>> I'm concerned about the round-trip latency of fetching an external 
>>> policy
>> Us too. We don't like the complexity added by the external policy file, 
>> but it was a popular request. It could reduce bandwidth for a site with 
>> a complex policy since it would be cachable.
> 
> I would recommend making the entire policy language significantly simpler, 
> such that it can be expressed in less space than a URL's length, which 
> would solve this problem as well as the above issues.

I think the vast majority of sites' policies will be less than a URL's
worth of text.

>>> or would it block page loading?
>> It will block page _parsing_, just as a <script> tag must (except, of 
>> course, before parsing starts).
> 
> I think that would basically make the external policy unusable for Google 
> properties. Specifying a policy inline would still be ok though.
> 
> 
>> We're seriously considering dropping <meta> support.
> 
> I would support dropping <meta> support.

I do too.  I haven't heard anyone object strongly to Sid's proposal to
drop <meta> support, so I imagine we'll be taking it out soon.

>>> I don't think UAs should advertise support for this feature in their 
>>> HTTP requests. Doing this for each feature doesn't scale.
>> I personally agree for all the reasons you mention, but we still have a 
>> potential versioning problem to resolve. Or not -- if we do nothing we 
>> could always add a CSP-2 header in the future if necessary. I'm just 
>> worried that it's unlikely that we thought of everything the first time 
>> through.
> 
> Just make sure it's forwards-compatible, so you can add new features, 
> then you don't need to version it. (The same way HTML and CSS and the DOM 
> have been designed, for instance.)

I think Dan summarized the trade-off nicely here:
http://groups.google.com/group/mozilla.dev.security/msg/787c87362d08bf5e

I can see why folks want to avoid a version string, but several of us
have limited confidence in our ability to design for forward
compatibility.  Perhaps you could provide some guidance in this
particular area, since you have a lot of experience doing so.
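One common way to buy forward compatibility without a version string, sketched here as an assumption rather than anything the draft specifies, is for parsers to silently skip directives they don't recognize, the way CSS drops unknown properties:

```python
KNOWN_DIRECTIVES = {"allow", "img-src", "script-src", "object-src"}

def parse_policy(header_value):
    """Parse a policy, silently dropping unrecognized directives so
    that future additions don't break older parsers (assumed
    behavior, not something the draft mandates)."""
    policy = {}
    for clause in header_value.split(";"):
        clause = clause.strip()
        if not clause:
            continue
        name, _, sources = clause.partition(" ")
        if name in KNOWN_DIRECTIVES:
            policy[name] = sources.split()
        # Unknown directive: ignored, for forward compatibility.
    return policy

# 'frame-ancestors' is unknown to this parser and is skipped:
print(parse_policy("allow self; frame-ancestors none; img-src *"))
```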

Thanks for the feedback, Ian.  It's great to have your voice in this
discussion.

Cheers,
Brandon
Received on Friday, 17 July 2009 21:59:18 GMT
