
Re: Cross-Site Requests, Users, UI (and What We're Trying to Fix)

From: Brandon Sterne <bsterne@mozilla.com>
Date: Wed, 09 Jul 2008 15:43:10 -0700
Message-ID: <48753EFE.6040503@mozilla.com>
To: public-webapps@w3.org
CC: Gerv Markham <gerv@mozilla.org>

On 06/25/2008 08:30 PM, Maciej Stachowiak wrote:
>> 2. Mitigation of XSS (Cross Site Scripting) and CSRF (Cross Site  
>> Request Forgery) Vulnerabilities.
> This one looks complicated and I'll need some time to review to form  
> an opinion. Some critical details seem to be missing from the  
> proposal, for example, one of the mechanisms calls for a preflight  
> policy check request but it is not described how to do this request.

There are admittedly areas of the proposal that need to be more fully
defined, rewritten, or left out altogether.  I was explicit about the
proposal not being a specification document, but rather a way to
(re)introduce a set of concepts to the broader Internet Security and
Developer communities, and start the discussion process that will
hopefully lead to the standardization of the policy framework.  I am
very excited to see the discussion starting within a W3C context!

With regard to the specific example you called out, preflight policy
checks, there was a proposed method described here:

The W3C Access Control specification was also mentioned as a potential
model to follow.  The proposal was not intended to be as detailed as a
W3C spec, but I am happy to address any specific questions or concerns.
I am hoping that many of these details can be debated here.
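To make the preflight idea concrete, here is a small sketch in the style of
the W3C Access Control model mentioned above (the header names are the ones
that model converged on; this is illustrative, not the proposal's own
mechanism, and the allowed-origin set is a made-up example):

```python
# Sketch: how a server-side preflight policy check could answer an
# OPTIONS request before the user agent sends the real cross-site request.
# Header names follow the W3C Access Control style; ALLOWED_ORIGINS is a
# hypothetical site policy, not anything from the proposal.

ALLOWED_ORIGINS = {"https://trusted.example"}

def preflight_response(request_headers):
    """Echo the requesting origin back only if the site's policy allows
    cross-site requests from it; an empty answer means 'deny'."""
    origin = request_headers.get("Origin")
    if origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Methods": "GET, POST",
        }
    # No access-control headers: the user agent must not send the
    # actual cross-site request.
    return {}
```

The point of the check is that the deciding information lives on the server,
so the user agent never has to guess whether a cross-site request is welcome.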

On 07/03/2008 03:27 PM, Jon Ferraiolo wrote:
> Is there a reason why the whitelist
> information is available only via HTTP headers (versus markup)? So that the
> info doesn't appear in View Source? (But wouldn't it still be viewable via
> Firebug?)

Initially, this proposal used HTTP headers only because it is generally
harder to inject headers into an HTTP response than content into the body
of the page; the idea was to minimize the attack surface where the
policies could be tampered with.
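A minimal sketch of that header-only rule, assuming a hypothetical header
name and policy syntax (neither is from the proposal): the user agent reads
the policy solely from the response headers, so policy-looking text injected
into the page body has no effect.

```python
# Header-only policy extraction: take the policy from a response header
# and never parse the body for it.  The header name and the policy value
# syntax here are hypothetical illustrations.

POLICY_HEADER = "X-Site-Security-Policy"

def extract_policy(headers, body):
    """Return the site policy from the response headers; the body is
    intentionally ignored, even if it contains policy-like text."""
    return headers.get(POLICY_HEADER)

# An attacker who can only inject into the body cannot alter the policy:
hdrs = {POLICY_HEADER: "allow-host self"}
body = "<p>injected: X-Site-Security-Policy: allow-host evil.example</p>"
policy = extract_policy(hdrs, body)
```

Allowing policy in markup would give up exactly this property, which is the
trade-off the alternate-transmission requests below have to weigh.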

Since the initial proposal, there have been numerous requests to have
alternate methods of policy transmission.  Many people have suggested a
model similar to the one Adobe employs for cross-domain Flash requests
(crossdomain.xml), in which a policy file is placed on the server and the
user agent either: 1) knows where to look for it, or 2) is directed to
the policy file location via HTTP headers or markup, e.g. <meta> tags.
There are compelling cases that demonstrate the need for both as there
are server operators who will have the ability to implement one or the
other (headers or policy file), but not both.  I can elaborate on these
if necessary.
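The two discovery routes above can be sketched as a candidate list the user
agent works through in order.  Every name here (the well-known path, the
pointer header, the <meta> name) is an illustrative assumption, not part of
any specification:

```python
# Sketch of policy-file discovery: pointer via header or <meta> markup
# first, then a well-known location (as with Adobe's /crossdomain.xml).
# WELL_KNOWN_PATH and POINTER_HEADER are hypothetical names.

WELL_KNOWN_PATH = "/site-policy.xml"
POINTER_HEADER = "X-Site-Security-Policy-URL"

def policy_candidates(origin, headers, meta_tags):
    """Return policy-file URLs in the order a user agent might try them."""
    candidates = []
    if POINTER_HEADER in headers:                # route 2: header pointer
        candidates.append(headers[POINTER_HEADER])
    for tag in meta_tags:                        # route 2: markup pointer
        if tag.get("name") == "site-security-policy-url":
            candidates.append(tag["content"])
    candidates.append(origin + WELL_KNOWN_PATH)  # route 1: known location
    return candidates
```

Supporting both routes matters precisely because, as noted above, some
server operators can set headers but not place files, and vice versa.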

The headers-vs-markup decision had nothing to do with hiding the
policies from the user.

On 07/04/2008 01:52 AM, Thomas Roessler wrote:
> Without speaking to the scope question, I think this is an
> interesting area of work.  I wonder how it might dovetail with
> ideas such as Google's Caja, and more general policy-enabling of
> in-browser method invocation models, and would be curious to hear
> your views on that.

I think that Site Security Policy and Caja are complementary
technologies.  In my mind, the former will be useful for restricting
where web content in a page can come from (mitigating script injection)
and who it can talk to (mitigating CSRF), while the latter will be used
to enable a safe subset of JavaScript functionality on a website.  Caja
will be especially useful on sites that need to incorporate untrusted
code, even when that code comes from known "good" locations.

As was mentioned elsewhere, it may be a good idea to split this
discussion into its own separate thread, but I'll leave that to someone
who has been a member of the list for more than 2 days :-)  I look
forward to continuing the discussion.

Brandon Sterne
Received on Thursday, 10 July 2008 05:47:21 UTC
