
Re: [Content Security Policy] Proposal to move the debate forward

From: Adam Barth <w3c@adambarth.com>
Date: Sun, 30 Jan 2011 22:26:38 -0800
Message-ID: <AANLkTinJc+70N8Zo9QhyDQ_y2K8ZZFhoKur-bRqrrJfB@mail.gmail.com>
To: Lucas Adamski <ladamski@mozilla.com>
Cc: public-web-security@w3.org
On Sun, Jan 30, 2011 at 4:07 PM, Lucas Adamski <ladamski@mozilla.com> wrote:
> On 1/28/2011 12:03 PM, Adam Barth wrote:
>> Well, the current design of CSP is difficult to implement
>> incrementally.  Even the simplest policies affect font loading.  It's
>> very convoluted to define a subset of the language that lets you
>> control script execution without also controlling font loading.
>
> I'm still trying to triangulate your specific, core concern about
> complexity as the overriding criterion.  Since we apparently established
> that the cost of browser implementation is not that high, and the
> complexity of the policy language is similar in the supposedly more
> common, minimal case of XSS mitigation (i.e. it's at least proportional
> to the detail and benefit as desired by the web developer), what
> specific cost are we incurring in the proposed CSP model vs. your proposal?

As a starting point, I would like to be able to mitigate script
injection without coupling that with controlling how fonts are loaded.
I don't really see much value in controlling where sites load fonts
from.  I do see a lot of value in controlling which scripts a site
executes.  In CSP, these two concerns are tied together so tightly that
they cannot be addressed separately.
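
To make the coupling concrete, here is roughly what a minimal policy
looks like under the current draft (header name and syntax per the
Mozilla proposal; the details are still in flux, so treat this as a
sketch rather than final syntax):

X-Content-Security-Policy: allow 'self'

Because "allow" acts as the fallback for every resource-loading
directive, this one line restricts not only script sources but also
fonts, frames, media, and every other subresource type, whether or not
the author intended that.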

> Is it the cost of standardization?  If so I'd point out (as others have
> already mentioned) one of the primary goals of this group and CSP in
> general is to develop "A policy language intended to enable web
> designers or server administrators to adjust the HTML5 security policy,
> and specify how content interacts on their web sites."   We don't move
> towards that goal by ignoring how these various security mechanisms
> should interact with each other and focusing exclusively on the most
> minimal implementation for a narrow attack vector.  To standardize
> something sooner while undermining one of the key goals doesn't feel
> like a win to me.

I don't really understand that paragraph.  I agree that we should
provide a policy framework that we can re-use for more things in the
future.  Today, however, I'm interested in mitigating scripting
injection.

> Is it just extensibility?  It seems like you are perhaps fundamentally
> objecting to the default "allow:" policy, insofar that it is implicitly
> coupled to all of the other defined directives.

Yes.  The "allow" construct is the worst offender: it couples concerns
I don't care about (e.g., font loading) with concerns I do care about
(e.g., script injection).

> Yet, we cannot evaluate
> the utility of the "allow:" directive in a vacuum of only XSS
> mitigations.  We have to consider how important that directive is once
> you have a richer, more granular policy language.   It may not seem all
> that useful, yet without it a simple XSS mitigation policy cannot be
> written simply.

Sure it can:

script-whitelist="example.com *.akami.net"

With suitable semantics, that sort of policy goes a long way towards
mitigating XSS.  Now, does it get every corner case of non-script
injection?  No.  Does it provide a lot of value at a reasonable cost?
Yes.  Sign me up.
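
To spell out the intended semantics (the header name and matching rules
here are illustrative assumptions, not an agreed syntax):

X-Script-Whitelist: example.com *.akami.net

Under a policy along these lines, inline scripts and scripts loaded
from hosts outside the whitelist would not execute; scripts from
example.com or any subdomain of akami.net would.  Fonts, images, and
other subresources would be unaffected.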

> Ignoring that and focusing just on XSS mitigation
> doesn't actually solve the problem; it just kicks the can down the road
> to the point that it's much harder to solve those actual problems once
> the fundamental syntax has already been fixed.

I disagree.  If we're forward-looking we can do a good job of
mitigating XSS today and paving the way to addressing other use cases
in the future.

> If the concern is around the future extensibility then by all means lets
> focus the discussion on that.  But removing all other non-XSS
> mitigations simply dodges the fundamental problem and only makes it
> harder to solve it later.

I don't think that's the case at all.  You're locked into a worldview
in which subresource loads are the most important thing, but that's just
one piece of the puzzle.  Looking at the bigger picture, it seems
like we can't really address everything in version 1.0 anyway.  It's
more important to nail the most important use cases first and pave the
path to supporting more use cases in the future.

>>> On the other hand, if all the browsers
>>> implement different initial subsets of CSP, that's not particularly helpful
>>> for site authors. If a discussion of what the initial feature set is gives
>>> each browser maker some idea of what the others are likely to implement, and
>>> also exposes decision makers to the arguments in support of each feature,
>>> that will be a win.
>>
>> The model works well for CSS.  Different vendors implement different
>> features at different times, including experimental features.  The
>> popular ones get folded into the main spec.  That seems like the
>> model we want here, both now and for the future.
>
> Those features are additive and their implementation is mostly
> orthogonal.  It seems like security models are different... security
> mechanisms are largely subtractive,

Being subtractive actually helps here, especially if folks use CSP as
a second line of defense (as they should), but we've been over that
ground before.

> and the interactions between them
> determine whether or not a given threat can be effectively mitigated.
> We are defining a policy language that will be applied to existing
> content, not simply additions to an existing soup of tags.  Having a
> bunch of independent security mechanisms without a common syntax,
> delivery mechanisms or clearly defined intersections between them
> undermines the intended benefit to web developers and admins.

Right, which is why I'm all for a common syntax, delivery mechanism,
and clearly defined interactions.  That doesn't mean we need to
deliver every conceivable use case in version 1.  For example, <img>,
<video>, and <canvas> have a common syntax, delivery mechanism, and
clearly defined interactions, but they were all designed, implemented,
and deployed at different times.

> What
> would be useful is a consistent language that lets a reference like
> OWASP give web developers and/or admins clear examples that state "if
> you want to protect your password reset page, here is a list of threats
> to be concerned about, and a corresponding sample CSP policy".

I'm all for that.  None of that is related to whether the script
injection mechanism is tightly bound to the mechanism for restricting
font loading.

> I like the use case page you started;  I think that is a good mechanism
> to organize our discussion around.  I hope some of the likely users of
> these security mechanisms will chime in with their specific use cases.

Thanks.  Please feel free to elaborate on that page.

Adam
Received on Monday, 31 January 2011 06:27:44 GMT
