
Re: (XMLHttpRequest 2) Proposal for cross-site extensions to XMLHttpRequest

From: Mark Nottingham <mnot@yahoo-inc.com>
Date: Fri, 14 Apr 2006 10:06:17 -0700
Message-Id: <0E26565D-4728-4B2E-989A-A7DDA851BF17@yahoo-inc.com>
Cc: public-webapi@w3.org
To: Ian Hickson <ian@hixie.ch>

On 2006/04/13, at 10:46 PM, Ian Hickson wrote:
> In any case this is largely academic. In practice there won't be that many
> resources that you'll be accessing cross-site, especially in the course of
> a single session.

That's a big assumption to make. I have use cases where a large  
variety of resources would need to be accessed cross-site.

BTW, would you consider these URIs to have different policies?


> Since in reality the problem is with the response, not the request, I'm
> starting to become of the opinion that there aren't any unsafe methods.

POST, PUT and DELETE are unsafe; are you suggesting that we redefine  
HTTP's concept of safety?

>> As stated before, I'm not sure the existence of one hole justifies the
>> intentional opening of other holes.
> It's not "one hole". Most of the Web works this way, always has.

I was referring to the ability to do a POST; obviously GET is  
possible through a variety of methods, but that's OK, because it's safe.

>> Not following you; why should other media types be prohibited? E.g., why
>> can't I POST or PUT some JSON or RDF to another site, if it wants to let me?
> You can. My point is that the only thing that cross-site XMLHttpRequest
> lets you do (other than reading the data that is returned) which existing
> mechanisms don't let you do, is change the Content-Type header (and other
> HTTP headers). So the only vulnerability we need to worry about is a site
> that only accepts data with a particular type (or with particular headers
> set). Any other service is already "vulnerable". And that's the only
> reason we're doing this GET-before-POST thing.

Sorry, I'm just an HTTP guy, so I'm a bit slow.

The attack I'm concerned about is an attacker writing some XHR code that
sends a request with a side effect on another server (say, your bank
account). XHR introduces a new attack vector here because it sends the
request with your cookies; the user doesn't have to initiate the
interaction.
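To make that concrete, here's a minimal sketch of the feared vector; the host, path and parameters are invented for illustration, and the XHR calls themselves are shown only as comments since they need a browser:

```javascript
// Sketch of the attack (bank.example and the fields are invented):
// the request a hostile page would construct if cross-site XHR
// were allowed without restriction.
const forged = {
  method: "POST",
  url: "https://bank.example/transfer",
  body: "to=attacker&amount=1000",
};

// In a browser, this would fire with no user interaction:
//   var req = new XMLHttpRequest();
//   req.open(forged.method, forged.url, true);
//   req.send(forged.body);
// and the browser would attach the user's bank.example cookies
// to the request automatically -- that's the new vector.
```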

It's true that it's possible to muck around with script tags and HTML  
forms to send an arbitrary POST without interaction (the "one hole"),  
but the existence of one accidental attack vector isn't justification  
for intentionally creating (and standardising) another bigger one  
(not just POST, but other methods as well).
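For comparison, the existing accidental vector looks something like this sketch (hosts and field names invented); note that it is confined to what an HTML form can express:

```javascript
// The "one hole": an auto-submitting cross-site form. If this HTML is
// served to a victim, it also fires without user interaction.
const formVector =
  '<form action="https://bank.example/transfer" method="POST" id="f">' +
  '<input type="hidden" name="to" value="attacker">' +
  '</form>' +
  '<script>document.getElementById("f").submit()<\/script>';

// A form, however, can only send application/x-www-form-urlencoded,
// multipart/form-data or text/plain, and cannot set arbitrary HTTP
// headers -- which is exactly the delta cross-site XHR would add.
```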

> Certain users are concerned that referers will let other sites know what
> they are doing, and so disable Referer headers, sometimes at levels that
> the UA has no control over, for example in proxies.
> Also, any request from an HTTPS page to an HTTP page has its Referer
> header removed.
> Thus we need a way to include the pertinent parts -- the domain and the
> protocol -- in the headers, so that the remote site can make an educated
> guess as to the intent of the first party and decide whether or not to
> grant that page access to its data.

Fair enough.

I do wonder how long it will take for browser preferences and proxies to
catch up; doubtless some people will want this blocked too. I'm reminded
of SOAPAction.
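Assuming the header does survive the trip, the remote site's decision could be as simple as this sketch; the header names and the allow-list here are my own inventions, not anything from the proposal:

```javascript
// Hedged sketch of the remote site's access decision, based on
// hypothetical headers carrying the requesting page's protocol and
// domain (names invented for illustration).
const ALLOWED_ORIGINS = new Set(["https://app.example.org"]);

function allowCrossSite(headers) {
  const proto = headers["x-requesting-protocol"];
  const domain = headers["x-requesting-domain"];
  // If a proxy or preference stripped the headers, deny by default.
  if (!proto || !domain) return false;
  return ALLOWED_ORIGINS.has(proto + "://" + domain);
}
```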


Mark Nottingham
Received on Friday, 14 April 2006 17:07:40 UTC
