Re: (XMLHttpRequest 2) Proposal for cross-site extensions to XMLHttpRequest

On 2006/04/13, at 5:06 PM, Ian Hickson wrote:
>> The right way to do this is with OPTIONS, but there's very poor Web
>> server support for it.
>
> OPTIONS is one of those features that was very nice in theory but is
> dead in practice. We can't realistically rely on it.

I know :(
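
For the record, this is roughly what the check would look like if
server support were there (a sketch; the URI is made up, and it
assumes the server actually answers OPTIONS):

    // Ask the server what it supports before attempting the unsafe
    // method (hypothetical URI):
    var check = new XMLHttpRequest();
    check.open("OPTIONS", "http://other.example/resource", false);
    check.send(null);
    // A well-behaved server advertises the methods it supports:
    var allowed = check.getResponseHeader("Allow"); // e.g. "GET, POST, OPTIONS"
    if (allowed && allowed.indexOf("POST") != -1) {
        // ... only then attempt the cross-site request ...
    }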

>> This proposal has some nice attributes, but it's very complex.
>
> I think it is simpler (a lot simpler) than a system based on a known
> location, with its own file format, etc. This is especially the case
> given that we need <?access-control?> anyway -- my proposal is just
> an additional set of rules for how to restrict XMLHttpRequest in a
> way that relies on that separate spec, and is describable in a few
> paragraphs.

Well, this proposal makes some trade-offs relative to a known
location / site-wide policy. It enforces a one-to-one
resource-to-policy relationship, which allows the scenarios you
mention (e.g., many authors on one server, administrative access
issues), but it incurs considerable overhead for sites that *do* have
homogeneous policies and/or a single administrator; each distinct
unsafe request will require a separate policy request, and the policy
will need to be co-ordinated across a potentially large number of
resources.

That will in turn encourage such sites to offer very coarse-grained
resources, overload content negotiation, misuse GET, or play other
tricks to avoid the extra round trips, which is a step in the wrong
direction IMO.

What about allowing in-content / in-header policy for safe methods,  
and going to a well-known location for unsafe methods?
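
Roughly like this (a sketch; the "Access-Control" header name is
illustrative, not proposed syntax, and the well-known location is the
one I suggest below):

    // Safe methods: just do the GET; the policy rides along with the
    // response (an <?access-control?> PI in the content, or a
    // response header):
    var get = new XMLHttpRequest();
    get.open("GET", "http://other.example/resource", false);
    get.send(null);
    var policy = get.getResponseHeader("Access-Control");

    // Unsafe methods: fetch a site-wide policy from a well-known
    // location first, before any POST/PUT/DELETE goes out:
    var site = new XMLHttpRequest();
    site.open("GET", "http://other.example/w3c/access-control", false);
    site.send(null);
    // ... check the policy document, and proceed with the unsafe
    // request if (and only if) it permits this requester ...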

>> If it were me, I'd be inclined to put a known location in the WD
>> (say, /w3c/access-control) in order to get the TAG -- or anybody
>> else -- motivated to come up with a better solution.
>
> That's a very dangerous way of designing specs. You are most likely
> to end up having implementations of your straw man.

Perhaps. I'm concerned about the proliferation of models and means of
attaching Web metadata: this, the content labels work, Web
description, etc. It would be good if every WG didn't invent a
different way of doing this. The well-known location cat is already
out of the bag; this is a new and unknown beast.

>> By that time the side effects have already happened on the server
>> side. Many CGI tools (unfortunately) treat GET query args and POST
>> bodies as equivalent, so there will be situations where it's
>> possible to craft an attack against a server whereby a GET has
>> side effects.
>
> This is out of scope for this proposal since it is already possible
> to do both GET and POST submissions to arbitrary URIs without any
> protection whatsoever.

OK (assuming you're referring to script tags and the like). As stated
before, I'm not sure the existence of one hole justifies intentionally
opening others.

> The XMLHttpRequest cross-site protection only needs to protect
> against two things:
>
>  1. Actually reading the data that is returned, and
>
>  2. Sending of request entity body payloads that are MIME types
>     other than text/plain, multipart/form-data,
>     application/x-www-form-urlencoded, and
>     application/x-www-form+xml.
>
> The second is only to protect against hypothetical servers that are
> actually checking the Content-Type of submissions. In practice I doubt
> it'll make the slightest difference.

Not following you; why should other media types be prohibited? E.g.,  
why can't I POST or PUT some JSON or RDF to another site, if it wants  
to let me?
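
That is, something like this (the endpoint is made up): a cross-site
POST with a media type outside the whitelisted four.

    var xhr = new XMLHttpRequest();
    xhr.open("POST", "http://other.example/collection", false);
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.send('{"name": "example"}');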

> My proposal actually protects more than that: it protects against
> reading the returned data and _any_ entity payloads. This is
> overkill, but makes the model simpler. (The extra roundtrip is only
> required for the second of these, which is probably overkill. We
> could probably drop it.)

Again, not following you; I thought the point of the second round  
trip was to avoid any undesired state changes / side effects on the  
server. If you're saying that it's OK for XHR to do an exploratory  
POST, PUT or DELETE against a server to see if the results have  
access-control information in them, I can't agree.
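
To be concrete, the flow I'm reading into your proposal looks like
this (a sketch; the resource and header name are illustrative):

    // The unsafe request itself goes out first:
    var probe = new XMLHttpRequest();
    probe.open("DELETE", "http://other.example/orders/123", false);
    probe.send(null);
    // Only now can the client look for access-control information --
    // but by this point the server may already have deleted the
    // order. The check comes too late to prevent the side effect.
    var policy = probe.getResponseHeader("Access-Control");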

>>>> 3) "Domain:" doesn't seem like the greatest name for something that
>>>> includes more than a DNS domain name, but I guess we can pretend it
>>>> means "security domain" or something.
>>>
>>> I am not in any way attached to any of the names; I don't mind
>>> better names, e.g. if we want Security-Domain: or whatever.
>>
>> I personally like Referer-Domain, as it is similar to the existing
>> Referer header (in fact, duplicative, but whatever).
>
> Referer has path information, which is a privacy problem;
> Referer-Domain would only include the domain, to get around this.
> (And the scheme, to allow for checks against DNS spoofing, but
> that's a minor detail.)

Could you go into that a bit more deeply? The site with control over
the cross-site request is the same party that controls how the
Referer is constructed (by controlling how its URIs are laid out),
so what's the exact concern here? How is this different from a normal
link between sites?
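
For reference, my understanding of the mechanical difference (a
sketch; the example URIs are made up):

    //   Referer:        http://app.example.org/users/alice/private.html
    //   Referer-Domain: http://app.example.org
    //
    // i.e., keep the scheme and authority, drop the path and query --
    // the parts with the privacy exposure:
    function refererDomain(referer) {
        var m = /^([a-z][a-z0-9+.\-]*:\/\/[^\/?#]+)/i.exec(referer);
        return m ? m[1] : null;
    }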

There's also some asymmetry here with the goals you stated earlier:
you wanted to allow people who don't control a whole site to set
access control, but Domain doesn't let the target of the XHR request
identify the requesting resource any more precisely than the site
it's on.

Cheers,

--
Mark Nottingham
mnot@yahoo-inc.com
