
Re: (XMLHttpRequest 2) Proposal for cross-site extensions to XMLHttpRequest

From: Mark Nottingham <mnot@yahoo-inc.com>
Date: Thu, 13 Apr 2006 16:45:18 -0700
Message-Id: <000E5623-E7AD-43BF-8B43-A87FB8A81683@yahoo-inc.com>
Cc: Maciej Stachowiak <mjs@apple.com>, public-webapi@w3.org
To: Ian Hickson <ian@hixie.ch>

Hi Ian,

Interesting proposal; a few comments inline.

On 2006/04/11, at 1:37 PM, Ian Hickson wrote:
> We could get around even this by requiring two additional headers --
>
>    Request-Nonce: 2957219521
>
> ...in the request, with a matching:
>
>    Response-Nonce: 2957219521
>
> ...in the response, which must match. This would cause issues with
> caches, but would basically make it impossible to cause a server to
> respond unless it actually wants to (we would prevent those two
> headers from being set as well).

This would effectively kill caching, and require two interactions for  
every cross-site request.
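To make the handshake concrete, here's a rough sketch of the client-side check; the header names (Request-Nonce, Response-Nonce) come from the quoted proposal, and everything else (the fetch callback, the nonce width) is illustrative:

```python
import secrets

def make_cross_site_request(url, fetch):
    """Sketch of the proposed nonce handshake.

    `fetch` stands in for the browser's HTTP layer: it takes a URL and a
    dict of request headers and returns a dict of response headers.
    """
    # Browser-generated; scripts would be barred from setting this header.
    nonce = str(secrets.randbelow(10**10))
    response_headers = fetch(url, {"Request-Nonce": nonce})
    # The response is exposed to the page only if the server echoed the
    # nonce, proving it understood (and opted into) a cross-site request.
    if response_headers.get("Response-Nonce") != nonce:
        raise PermissionError("server did not confirm the cross-site request")
    return response_headers
```

Note that because the echoed nonce differs per request, any intermediary cache storing the response would serve a stale nonce to the next client, which is exactly why this scheme defeats caching.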

>> I think the risk could be mitigated by for instance requiring a
>> control file in a known location, even if it is nothing more than an
>> on-off switch for the access-control based feature.
>
> Known locations don't work for multiple reasons:
>
>  - Caching means they can't be updated quickly for patching security
>    holes, especially on large sites (which ironically are the most
>    likely to be targeted).

So, give them Cache-Control: max-age=300. Large sites will still get
a considerable benefit from caching, and be able to change their
policies within five minutes (or three minutes, or...). A site with
an urgent XSS security hole can also deny requests from a particular
referer... assuming, of course, that Referer can't be modified in XHR.
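As a toy illustration of the bound max-age places on policy propagation (the helper is hypothetical, not part of any proposal):

```python
import time

def is_fresh(stored_at, max_age, now=None):
    """HTTP-style freshness check: a cached policy file served with
    Cache-Control: max-age=300 is reused from cache for at most 300
    seconds, so a policy change propagates everywhere within five
    minutes while most requests still never touch the origin."""
    now = time.time() if now is None else now
    return (now - stored_at) < max_age
```

The trade-off is a dial, not a switch: a smaller max-age means faster policy changes at the cost of more revalidation traffic.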

>  - They either don't handle common cases like university student sites
>    (cases that are critical given that this is where many new authors
>    start out -- it's in our interests to make it easy for new authors
>    to use these technologies), or, they do handle them but are
>    ridiculously over-complicated.
>
>  - In corporate environments they rely on the site admin being
>    available to fix problems, which may often not be the case, both in
>    large companies spanning multiple time zones, and in small companies
>    where the admin may not be at work.

True.

The right way to do this is with OPTIONS, but there's very poor Web  
server support for it.

>  - They require an additional round-trip in even the simplest case
>    (this is especially bad for the common case of getting an RSS file).

As does this proposal for non-GET methods -- which are many people's  
primary use cases.

>  - They generate 404s that pollute access logs.

If people are going to try to abuse a site that doesn't want to allow  
XSS, it's going to show up in the logs somehow.

> Known locations was a design mistake for robots.txt, it was a design
> mistake for favicon.ico, and it was a design mistake for p3p.xml.
> Let's not make a fourth design mistake.

They are, but there isn't any other option on the table yet. This  
proposal has some nice attributes, but it's very complex.

If it were me, I'd be inclined to put a known location in the WD
(say, /w3c/access-control) in order to get the TAG -- or anybody else
-- motivated to come up with a better solution. How about an open
source Apache module and IIS plug-in to give users control of OPTIONS
responses?
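In the OPTIONS-based world, such a module would just emit the policy as response headers on a per-resource basis. A minimal sketch of what that response might look like -- the Access-Control header name is from the draft under discussion, but the value syntax here is purely illustrative:

```python
def options_response(policy):
    """Build a minimal HTTP response to an OPTIONS request that carries
    the resource's cross-site access policy, so no fixed well-known
    location (and no extra 404 noise) is needed."""
    return (
        "HTTP/1.1 200 OK\r\n"
        f"Access-Control: {policy}\r\n"   # value syntax is illustrative
        "Content-Length: 0\r\n"
        "\r\n"
    )
```

A server module could generate this from the same per-directory configuration admins already use, rather than a single site-wide file.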

>> 2) Using GET as a preflight of the access control for other http
>> methods seems potentially risky. Often, the server-side code for
>> different methods on the same resource will not be that closely
>> related, and indeed, it's possible for content authors not to even
>> be aware that a resource where they are granting access for GET also
>> supports PUT or DELETE or POST.
>
> In practice that is fine, since you have to do extra work to make PUT,
> DELETE or POST do anything.
>
> However, we can adjust the proposal to require the server to confirm
> that it is saying it is ok to do a particular type of response, e.g.
> by having headers:
>
>    Request-Confirm-Method: POST
>
> ...requiring a response of:
>
>    Response-Confirm-Method: POST
>
> ...to be accepted.

By that time the side effects have already happened on the server  
side. Many CGI tools (unfortunately) treat GET query args and POST  
bodies as equivalent, so there will be situations where it's possible  
to craft an attack against a server whereby a GET has side effects.
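The pitfall is easy to reproduce; here's a sketch of the parameter handling many CGI libraries do (the function is hypothetical, but the merging behaviour mirrors the common convention):

```python
from urllib.parse import parse_qs

def get_params(query_string, body):
    """Mimics the (unfortunate) CGI convention described above: the
    query string and the request body are parsed interchangeably, so a
    GET with ?action=delete reaches the same code path as a POST with
    action=delete in its body."""
    params = dict(parse_qs(query_string))
    params.update(parse_qs(body))
    return params
```

Against such a server, the GET "preflight" can itself trigger the side effect it was supposed to guard, because get_params("action=delete", "") and get_params("", "action=delete") yield the same parameters.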

>> 3) "Domain:" doesn't seem like the greatest name for something that
>> includes more than a DNS domain name, but I guess we can pretend it
>> means "security domain" or something.
>
> I am not in any way attached to any of the names, I don't mind better
> names, e.g. if we want Security-Domain: or whatever.

I personally like Referer-Domain, as it is similar to the existing
Referer header (in fact, duplicative, but whatever).

WRT the Access-Control header, it should probably be
Content-Access-Control, as it's an entity header.

Cheers,

--
Mark Nottingham
mnot@yahoo-inc.com
Received on Thursday, 13 April 2006 23:45:50 GMT
