Re: (XMLHttpRequest 2) Proposal for cross-site extensions to XMLHttpRequest

On Tue, 11 Apr 2006, Maciej Stachowiak wrote:
> 
> 1) I think the most serious risk with this proposal is against dynamic 
> documents that allow header or content injection, either accidentally or 
> on purpose for testing. Any CGI script or similar that echoes back 
> headers of the requester's choice would automatically become an open 
> resource to the whole internet, and I'm sure such things must exist for 
> testing at the very least.

My bad, I forgot one restriction that we must also add -- it must be 
impossible for setRequestHeader() to set an Access-Control header.
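
Purely as an illustrative sketch (not proposal text), the check inside 
setRequestHeader() would amount to something like the following, written 
here in TypeScript, with a simple map standing in for the UA's internal 
request state:

   // Sketch only: refuse to let script set an Access-Control header.
   const requestHeaders = new Map<string, string>();

   function setRequestHeader(name: string, value: string): void {
     if (name.toLowerCase() === "access-control") {
       return; // silently dropped; script cannot forge this header
     }
     requestHeaders.set(name.toLowerCase(), value);
   }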

With this, the only way I can see to make a script emit an 
Access-Control header when it shouldn't is if the script takes GET 
requests, extracts header names and values from the URI, and replies 
with them as custom response headers, Access-Control among them.
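
To make that concrete, such a script would look roughly like this (a 
hypothetical Node/TypeScript handler; the query-parameter-to-header 
convention is invented purely for illustration):

   // Hypothetical vulnerable echo script: it copies query parameters
   // straight into response headers, so a crafted URI could make it
   // appear to opt in to cross-site access.
   import * as http from "http";

   http.createServer((req, res) => {
     const url = new URL(req.url ?? "/", "http://example.invalid");
     for (const [name, value] of url.searchParams) {
       res.setHeader(name, value); // the dangerous step
     }
     res.end("echoed\n");
   }).listen(8080);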

We could get around even this by requiring two additional headers -- 

   Request-Nonce: 2957219521

...in the request, with a matching:

   Response-Nonce: 2957219521

...in the response; the UA would only expose the response if the two 
nonces match. This would cause issues with caches, but it would make it 
essentially impossible to get a server to respond unless it actually 
wants to (setRequestHeader() would be barred from setting these two 
headers as well).
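
From the UA's point of view this is roughly the following (again just a 
sketch; the header names are the ones above, while the nonce generation 
and the fetch-style API are illustrative assumptions):

   // Sketch of the nonce round-trip: the response is only exposed to
   // the page if the server echoes the nonce back.
   async function crossSiteGet(uri: string): Promise<Response> {
     const nonce = crypto.randomUUID();
     const response = await fetch(uri, {
       headers: { "Request-Nonce": nonce },
     });
     if (response.headers.get("Response-Nonce") !== nonce) {
       throw new Error("Response-Nonce missing or mismatched");
     }
     return response;
   }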


> So, in itself, that might not be too bad an exploit. You can't get the 
> Cookie or Authorization header, or document.cookie, so even if you find 
> such a test script on a live server where users have login accounts, 
> you can't steal anything directly. However, suppose there's a test 
> script that also echoes back, in the body, all the headers it 
> receives -- some kind of debug mode, maybe. Now you have something 
> exploitable.

Your script is getting somewhat complex now -- it needs to take GET 
query parameters and convert them into HTTP response headers, and also 
echo all the request headers into the body. Does this ever happen? I've 
written echo scripts myself, but I can't think of any that would be 
vulnerable here.


> Or suppose this script can be persuaded to redirect to another 
> resource of your choice via a Location: header (I suppose restricting 
> redirects could be patched into the proposal though - only the final 
> response counts).

Yes, I assumed that the current redirect rules are applied before my 
changes take effect. This would have to be made clear.


> I think the risk could be mitigated by for instance requiring a control 
> file in a known location, even if it is nothing more than an on-off 
> switch for the access-control based feature.

Known locations don't work for multiple reasons:

 - Caching means they can't be updated quickly for patching security
   holes, especially on large sites (which ironically are the most
   likely to be targeted).
 - They either don't handle common cases like university student sites 
   (cases that are critical given that this is where many new authors 
   start out -- it's in our interests to make it easy for new authors to
   use these technologies), or they do handle them but are ridiculously
   over-complicated.
 - In corporate environments they rely on the site admin being available
   to fix problems, which may often not be the case, both in large 
   companies spanning multiple time zones, and in small companies where 
   the admin may not be at work.
 - They require an additional round-trip in even the simplest case (this
   is especially bad for the common case of getting an RSS file).
 - They generate 404s that pollute access logs.

The known-location approach was a design mistake for robots.txt, it was 
a design mistake for favicon.ico, and it was a design mistake for 
p3p.xml. Let's not make a fourth design mistake.


> 2) Using GET as a preflight of the access control for other http methods 
> seems potentially risky. Often, the server-side code for different 
> methods on the same resource will not be that closely related, and 
> indeed, it's possible for content authors not to even be aware that a 
> resource where they are granting access for GET also supports PUT or 
> DELETE or POST.

In practice that is fine, since you have to do extra work to make PUT, 
DELETE or POST do anything.

However, we can adjust the proposal to require the server to explicitly 
confirm that a particular method is allowed, e.g. by having the request 
include:

   Request-Confirm-Method: POST

...and requiring a response of:

   Response-Confirm-Method: POST

...to be accepted.
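
One possible reading, sketched the same way as before (the confirmation 
rides on the GET preflight; the header names are the ones above, 
everything else is an illustrative assumption):

   // Sketch: the real request is only sent if the GET preflight
   // echoes the method back in Response-Confirm-Method.
   async function preflightAllows(method: string, uri: string) {
     const probe = await fetch(uri, {
       headers: { "Request-Confirm-Method": method },
     });
     return probe.headers.get("Response-Confirm-Method") === method;
   }

   async function crossSitePost(uri: string, body: string) {
     if (!(await preflightAllows("POST", uri))) {
       throw new Error("server did not confirm POST");
     }
     return fetch(uri, { method: "POST", body });
   }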


> 3) "Domain:" doesn't seem like the greatest name for something that 
> includes more than a DNS domain name, but I guess we can pretend it 
> means "security domain" or something.

I am not in any way attached to any of the names; I don't mind better 
names, e.g. Security-Domain: or whatever, if we want.


> 4) access-control PI has a somewhat odd security model. Some allows are 
> processed before some denies, based on a fairly complex model of 
> specificity, and pretty much ignoring the order of the rules specified. 
> It would be better to do something simpler, like denies take precedence 
> over allows always, rules take precedence in their specified order, etc. 
> Right now there are 8 steps to interpreting the access-control rules, 
> which seems too complex for something that sets a security policy. 
> Obviously this is fixable without hitting at the heart of the proposal 
> in any case.

The idea is that the security model of Access Control is a modular piece 
independent of the actual XMLHttpRequest cross-site mechanism, so we can 
work on one independently of the other.

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
