RE: Comments on: Access Control for Cross-site Requests

Hi Mark,

Seem to be running into you a lot lately. Just one question...

Mark Nottingham wrote:
> On 02/01/2008, at 1:00 AM, Ian Hickson wrote:
> > On Mon, 31 Dec 2007, Close, Tyler J. wrote:
> >>
> >> 1. Browser detects a cross-domain request
> >> 2. Browser sends GET request to /1234567890/Referer-Root
> >> 3. If server responds with a 200:
> >>    - let through the cross-domain request, but include a Referer-Root
> >>      header. The value of the Referer-Root header is the relative URL /,
> >>      resolved against the URL of the page that produced the request.
> >>      HTTP caching mechanisms should be used on the GET request of step 2.
> >> 4. If the server does not respond with a 200, reject the cross-domain
> >>    request.
> >
> > This is a very dangerous design. It requires authors to be able to
> > guarantee that every resource across their entire server is capable of
> > handling cross-domain requests safely. Security features with the
> > potential damage of cross-site attacks need to default to a safe state
> > on a per-resource basis, IMHO.
>
> Agreed. Some sort of map of the server is needed with this approach.

The above comment leaves me with the impression that you, too, think the client should enforce the server's access control policy. I find this a really strange position. Ian seems to justify it on the grounds that client developers will deploy better software faster than server developers will. Is that also your rationale?
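Just so we are all looking at the same proposal, here is the flow from the steps quoted above as a rough sketch. Python is standing in for browser internals; the function names are mine and purely illustrative, not a suggested implementation:

from urllib.parse import urljoin
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

PROBE_PATH = "/1234567890/Referer-Root"

def probe_allows_cross_domain(target_url):
    # Step 2: GET the well-known probe resource on the target server.
    # Steps 3/4: only a 200 lets the cross-domain request through.
    # Ordinary HTTP caching should apply to this probe (omitted here).
    probe_url = urljoin(target_url, PROBE_PATH)
    try:
        with urlopen(Request(probe_url)) as resp:
            return resp.status == 200
    except (HTTPError, URLError):
        return False

def send_cross_domain(requesting_page_url, target_url, method="GET", body=None):
    # Step 1 (detecting that the request is cross-domain) is assumed.
    if not probe_allows_cross_domain(target_url):
        raise PermissionError("cross-domain request rejected")   # step 4
    # The Referer-Root value is the relative URL "/" resolved against
    # the URL of the page that produced the request.
    referer_root = urljoin(requesting_page_url, "/")
    req = Request(target_url, data=body, method=method,
                  headers={"Referer-Root": referer_root})
    return urlopen(req)                                          # step 3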

Please keep in mind that a positive response to the GET request of step 2 just means that the server admin is saying: "Yes, I've set up some server-side software to control cross-domain requests". It doesn't mean: "I'm blindly letting through all cross-domain requests, my users be damned!", as some seem to be implying.
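For what it's worth, that server-side software can be very small. Here is an illustrative sketch (WSGI-style Python; the probe path comes from the proposal, but the policy table and handler names are made up). The point is only that answering the probe with a 200 is an explicit opt-in, and the per-resource decision still happens on the server:

PROBE_PATH = "/1234567890/Referer-Root"

# Purely illustrative policy: which foreign origins may reach which paths.
CROSS_DOMAIN_POLICY = {
    "/public/feed": {"http://partner.example/"},
}

def application(environ, start_response):
    path = environ.get("PATH_INFO", "")
    referer_root = environ.get("HTTP_REFERER_ROOT")

    # Answering the probe is the opt-in: "yes, cross-domain requests
    # are policed here". It is cacheable like any other GET.
    if path == PROBE_PATH:
        start_response("200 OK", [("Cache-Control", "max-age=3600"),
                                  ("Content-Type", "text/plain")])
        return [b"cross-domain requests are policed here\n"]

    # Per-resource decision: a request carrying Referer-Root (i.e. a
    # cross-domain request) is only honoured where the policy says so.
    if referer_root is not None and \
       referer_root not in CROSS_DOMAIN_POLICY.get(path, set()):
        start_response("403 Forbidden", [("Content-Type", "text/plain")])
        return [b"cross-domain request refused for this resource\n"]

    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello\n"]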

--Tyler

Received on Thursday, 3 January 2008 02:18:41 UTC