
Re: (XMLHttpRequest 2) Proposal for cross-site extensions to XMLHttpRequest

From: Maciej Stachowiak <mjs@apple.com>
Date: Tue, 11 Apr 2006 02:50:07 -0700
Message-Id: <BD44FEB9-0B62-4D3F-8067-7C4B6D039018@apple.com>
Cc: public-webapi@w3.org
To: Ian Hickson <ian@hixie.ch>

Hi Ian,

Thanks for sending this out.

First, as an intro, I'll say some things that maybe go without
saying. On the one hand, the ability to do a limited form of
cross-site XMLHttpRequest is a very useful feature that could allow
for lots of interesting cross-site interaction. But on the other
hand, it also implies a pretty significant change to the web security
model, so any proposal must be reviewed with extreme paranoia.

That said, here are some comments, ranging from potentially serious
to trivialities about naming.

1) I think the most serious risk with this proposal is against  
dynamic documents that allow header or content injection, either  
accidentally or on purpose for testing. Any CGI script or similar  
that echoes back headers of the requester's choice would  
automatically become an open resource to the whole internet, and I'm  
sure such things must exist for testing at the very least.

So, in itself, that might not be too bad an exploit. You can't get
the Cookie or Authorization header, or document.cookie, so even if
you find such a test script on a live server where users have login
accounts, you can't directly read their credentials. However, suppose
there's a test script that also echoes back all the headers it
receives in the body, some kind of debug mode maybe. Now you have
something exploitable. Or suppose this script can be persuaded to
redirect to another resource of your choice via a Location: header (I
suppose restricting redirects could be patched into the proposal
though - only the final response counts).
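
The kind of script I have in mind could be as simple as the following
sketch (the handler and header names are illustrative, not taken from
the proposal verbatim):

```python
# A minimal sketch of the vulnerable pattern described above: a debug
# CGI-style handler that echoes request headers of the caller's choice.
# If an attacker can make it emit whatever access-control header the
# proposal defines (an assumption here; exact header names vary), the
# resource is silently opened to the whole web.
def debug_echo(request_headers):
    response_headers = {"Content-Type": "text/plain"}
    for name, value in request_headers.items():
        # Echoes back any header the requester supplied, verbatim.
        response_headers[name] = value
    # "Debug mode": dump every header into the body as well, including
    # any Cookie or Authorization header the browser attached.
    body = "\n".join(f"{n}: {v}" for n, v in request_headers.items())
    return response_headers, body

hdrs, body = debug_echo({"X-Debug": "1", "Cookie": "session=secret"})
```

The second failure mode is visible in the last line: the body now
contains the credentials the browser sent automatically.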

The fact that such exploits may exist against content that is today  
useful and not a security risk make me suspicious of the proposal as-is.

I think the risk could be mitigated by, for instance, requiring a
control file in a known location, even if it is nothing more than an
on-off switch for the access-control-based feature. Or one could come
up with fancier central-file schemes. Either way, the advantage is
that it gives site admins the opportunity to audit content on their
sites before it becomes potentially exposed to cross-site access.
There may be other ways of achieving this same goal.
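
As a rough sketch of that mitigation (the well-known path and the
file format are invented here purely for illustration):

```python
# Hypothetical mitigation sketch: before honoring any per-resource
# access-control rules, the user agent fetches a site-wide switch file
# from a well-known location. Cross-site access is only even considered
# if the site has explicitly opted in.
WELL_KNOWN_PATH = "/cross-site-access"  # invented location

def cross_site_allowed(fetch, origin_server):
    """fetch(url) returns the response body, or None on error/404."""
    switch = fetch(origin_server + WELL_KNOWN_PATH)
    if switch is None:
        return False                  # no control file: feature stays off
    return switch.strip() == "on"     # simplest possible on-off switch

def fake_fetch(url):
    # Stand-in for a network fetch: only example.org has opted in.
    site = {"http://example.org/cross-site-access": "on"}
    return site.get(url)

allowed = cross_site_allowed(fake_fetch, "http://example.org")
```

The point is the default: a site that has never heard of the feature
stays closed, and an admin has one place to audit.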

2) Using GET as a preflight of the access control for other HTTP
methods seems potentially risky. Often, the server-side code for
different methods on the same resource will not be that closely
related, and indeed, content authors may not even be aware that a
resource for which they are granting access to GET also supports PUT
or DELETE or POST.
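
To illustrate the concern (all names here are hypothetical):

```python
# The GET handler deliberately opts in to cross-site access, but the
# DELETE handler on the same URL was written separately and never
# considered cross-site use at all.
def handle_get(path):
    # Author grants cross-site read access to a public feed
    # (header name is illustrative).
    return {"Content-Access-Control": "allow <*>"}, "public feed data"

def handle_delete(path, store):
    # Unrelated code path: deletes the resource with no access check.
    store.pop(path, None)
    return {}, "deleted"

# A GET-based "preflight" sees the allow rule and then permits DELETE,
# even though the DELETE code never opted in.
store = {"/item": "data"}
headers, _ = handle_get("/item")
if "allow" in headers.get("Content-Access-Control", ""):
    handle_delete("/item", store)   # destructive side effect
```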

3) "Domain:" doesn't seem like the greatest name for something that  
includes more than a DNS domain name, but I guess we can pretend it  
means "security domain" or something.

4) The access-control PI has a somewhat odd security model. Some
allows are processed before some denies, based on a fairly complex
model of specificity that pretty much ignores the order in which the
rules are specified. It would be better to do something simpler:
denies always take precedence over allows, rules apply in their
specified order, etc. Right now there are 8 steps to interpreting the
access-control rules, which seems too complex for something that sets
a security policy. Obviously this is fixable without touching the
heart of the proposal in any case.
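
Either of the simpler policies can be stated in a few lines; here is
a sketch, assuming rules boil down to ("allow" | "deny", pattern)
pairs, which is my simplification rather than the proposal's syntax:

```python
def matches(pattern, origin):
    # Deliberately crude matching for illustration only.
    return pattern == "*" or pattern == origin

def deny_wins(rules, origin):
    # Option A: any matching deny beats any allow, regardless of order.
    if any(mode == "deny" and matches(p, origin) for mode, p in rules):
        return False
    return any(mode == "allow" and matches(p, origin) for mode, p in rules)

def first_match(rules, origin):
    # Option B: the first matching rule, in the order written, decides.
    for mode, pattern in rules:
        if matches(pattern, origin):
            return mode == "allow"
    return False  # default-deny when nothing matches

rules = [("allow", "*"), ("deny", "evil.example")]
```

With that rule list, option A refuses evil.example while option B
admits it (the allow-all rule matches first), so the two policies
differ, but each is auditable at a glance, which is the property a
security policy language should have.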

Those are my main comments. I actually think #1 is fairly serious,  
even though it may sound like a quibbling corner case.

Received on Tuesday, 11 April 2006 16:03:13 UTC
