Re: Comments on: Access Control for Cross-site Requests

A few thoughts, FWIW:

On 02/01/2008, at 1:00 AM, Ian Hickson wrote:

>
> On Mon, 31 Dec 2007, Close, Tyler J. wrote:
>>
>> 1. Browser detects a cross-domain request
>> 2. Browser sends GET request to /1234567890/Referer-Root
>> 3. If server responds with a 200:
>>    - let through the cross-domain request, but include a Referer-Root
>> header. The value of the Referer-Root header is the relative URL /,
>> resolved against the URL of the page that produced the request. HTTP
>> caching mechanisms should be used on the GET request of step 2.
>> 4. If the server does not respond with a 200, reject the cross-domain
>> request.
>
> This is a very dangerous design. It requires authors to be able to
> guarantee that every resource across their entire server is capable
> of handling cross-domain requests safely. Security features with the
> potential damage of cross-site attacks need to default to a safe
> state on a per-resource basis, IMHO.

Agreed. Some sort of map of the server is needed with this approach.
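For concreteness, here is the proposed flow as I read it, sketched in
Python. The path /1234567890/Referer-Root and the Referer-Root header
come from the proposal above; the function names and everything else
are mine, purely illustrative:

    from urllib.parse import urljoin, urlsplit
    from urllib.request import urlopen

    def referer_root(page_url):
        # Step 3: the relative URL "/" resolved against the URL of the
        # page that produced the request.
        return urljoin(page_url, "/")

    def allow_cross_domain(page_url, target_url):
        # Step 1: only cross-domain requests trigger the check. (Crudely
        # comparing host:port here; a real check would compare origins.)
        if urlsplit(page_url).netloc == urlsplit(target_url).netloc:
            return True
        # Step 2: GET the fixed location on the target server.
        probe = urljoin(target_url, "/1234567890/Referer-Root")
        try:
            with urlopen(probe) as resp:
                # Steps 3-4: a 200 lets the request through (with the
                # Referer-Root header attached); anything else rejects.
                return resp.status == 200
        except OSError:
            # Non-2xx responses raise HTTPError, an OSError subclass.
            return False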

> Furthermore, relying on a 200 OK response would immediately expose
> all the sites that return 200 OK for all URIs to cross-site-scripting
> attacks. (The high-profile case of the Acid2 test's 404 page
> returning a 200 OK recently should caution us against assuming that
> sites are all currently safe in this regard -- if even the Web
> Standards Project can run into issues like this, what hope is there
> for everyone else?)

The point above makes this largely moot -- something more than just
200 OK is necessary.
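As a strawman for what that "something more" could look like: require
an explicit, unambiguous opt-in in the probe response itself, so that
a catch-all "200 for every URI" error page can't accidentally grant
consent. The token below is invented for illustration, not from any
draft:

    from urllib.request import urlopen

    def explicit_opt_in(probe_url):
        try:
            with urlopen(probe_url) as resp:
                if resp.status != 200:
                    return False
                body = resp.read(1024).decode("utf-8", "replace")
        except OSError:
            return False
        # Hypothetical declaration; a misconfigured 404-as-200 page is
        # vanishingly unlikely to contain this exact string.
        return "cross-site-requests: allow" in body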

> (There is also the problem that any fixed URI has -- /robots.txt,
> /w3c/p3p.xml, etc. are all considered very bad design from a URI
> point of view, as they require an entire domain to always be under
> the control of the same person, whereas here we might well have
> cases where partitions of a domain are under the control of
> particular users, with the different partitions having different
> policies.)

And yet the W3C has already published one Recommendation using a
well-known location. I'd suggest that this WG defer the decision of
how to locate site-wide metadata to the TAG or some other body, or to
reuse one of the existing mechanisms if they can't pull their
collective finger out. Inventing yet another way to do the same thing
isn't good for anybody.


> Furthermore, there is a desire for a design that can be applied to
> purely static data where the user has no server-side control
> whatsoever. With your proposal, even publishing a single text file
> or XML file with some data would require scripting, which seems like
> a large onus to put on authors who are quite likely inexperienced in
> security matters.


How so?
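Under the proposal as quoted, satisfying the probe looks like a matter
of putting a static file at the fixed location, so that GET
/1234567890/Referer-Root returns 200; no scripting involved. An
illustrative sketch:

    import os

    # Opting a purely static site in, per my reading of the proposal:
    # an empty file at the probe location is enough for a 200 response.
    os.makedirs("htdocs/1234567890", exist_ok=True)
    with open("htdocs/1234567890/Referer-Root", "w") as f:
        f.write("")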
