RE: Comments on: Access Control for Cross-site Requests

On Wed, 2 Jan 2008, Close, Tyler J. wrote:
> >
> > This is a very dangerous design. It requires authors to be able to 
> > guarantee that every resource across their entire server is capable of 
> > handling cross-domain requests safely. Security features with the 
> > potential damage of cross-site attacks need to default to a safe state 
> > on a per-resource basis, IMHO.
> 
> Sure, but the question is: "Whose responsibility is it?".

No, that isn't the question. This isn't a blame game; the priority here is 
to guarantee that a user upgrading from one Web browser to another is not 
exposed to new attack vectors, whether that is because of a bug on the 
server that the client happens to expose or because of a bug in the client 
itself.


> > Furthermore, relying on a 200 OK response would immediately expose all 
> > the sites that return 200 OK for all URIs to cross-site-scripting 
> > attacks.
> 
> Fine, so have the server respond with something unmistakeable. In the 
> extreme, the server could be required to respond with a text entity 
> containing a large random number whose value is specified in the rec.

An unmistakeable handshake like this would certainly be an improvement. 
The next problem is that some servers can be tricked into 
returning particular content, which is how, e.g., Flash's "magic file" 
security mechanism was compromised (with dire consequences). (The 
specifics of Flash's vulnerabilities could be worked around if we were 
careful, but the existence of ways to break that model should inform our 
design and thus I am reluctant to rely on that kind of mechanism.)
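
For concreteness, a client-side check for that kind of fixed-token 
handshake might look roughly like the sketch below (Python; the token 
value, URL layout and function name are all invented for illustration, not 
taken from any draft). Note that the check only proves that the server 
returned the expected bytes, not that the site's operator opted in; any 
resource that can be made to echo attacker-chosen content passes it, which 
is the Flash-style failure mode described above.

    # Hypothetical sketch of the "large random number" handshake check.
    # MAGIC_TOKEN and the URL layout are invented for illustration only.
    import urllib.request

    MAGIC_TOKEN = b"7f3a9c1e5b8d2046"  # the value the rec would fix

    def site_opts_in(origin):
        # Fetch the agreed handshake resource and compare its body to the
        # fixed token. A server that can be tricked into echoing this byte
        # sequence (an upload area, a reflected query parameter, and so
        # on) "opts in" without its administrator ever intending to.
        with urllib.request.urlopen(origin + "/handshake.txt") as resp:
            return resp.read().strip() == MAGIC_TOKEN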

Such a handshake also still leaves the per-server vs. per-resource issue 
mentioned below.


> > (There is also the problem that any fixed URI has -- /robots.txt 
> > /w3c/p3p.xml, etc are all considered very bad design from a URI point 
> > of view, as they require an entire domain to always be under the 
> > control of the same person, whereas here we might well have cases 
> > where partitions of a domain are under the control of particular 
> > users, with the different partitions having different policies.)
> 
> In which case it is only necessary that whoever has control of the 
> special URL has coordinated with the other users of the host. These are 
> all arrangements that can be made server side, without exposing the 
> details to clients.

Sadly it is in many cases far easier for server-side authors to negotiate 
changes on the client side than it is for them to get their own server 
administration team to change configurations.


> > Furthermore, there is a desire for a design that can be applied to purely 
> > static data where the user has no server-side control whatsoever. With 
> > your proposal, even publishing a single text file or XML file with 
> > some data would require scripting, which seems like a large onus to 
> > put on authors who are quite likely inexperienced in security matters.
> 
> Again, this is server-side setup. Particular servers may well choose to 
> deploy technology much like what this WG has created. We just don't have 
> to say that everyone has to do it that way. We don't need that broad an 
> agreement. These technology choices can be confined to the server-side. 
> We only need a way for client and server to signal the presence of such 
> a mechanism, in particular, declaring that each understands the meaning 
> of the Referer-Root header.

If we have an otherwise static page where the server decides whether or 
not the page is returned based on headers, we lose all caching benefits 
(since everything always has to go back to the server for confirmation). 
If we allowed caching anyway, we would be at risk of the server side 
misconfiguring the cache headers (a _very_ common problem) and thus of 
decisions made for one set of users being exposed to another set, either 
breaking cross-domain scripts unexpectedly or, more likely, exposing 
sensitive data to hostile first parties.
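
To make the caching hazard concrete, here is a hypothetical sketch 
(Python; the handler, file name, allowed-origin list and header choices 
are all mine, not the spec's) of a server making the access decision per 
request based on a header such as Referer-Root. Unless every such response 
also carries the right Vary and Cache-Control headers, a shared cache is 
entitled to replay the response computed for one requester to a different 
one:

    # Hypothetical sketch: per-request access decision over a static file.
    # ALLOWED_SITES, data.xml and the header handling are illustrative.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ALLOWED_SITES = {"https://partner.example"}

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            requester = self.headers.get("Referer-Root", "")
            if requester in ALLOWED_SITES:
                body = open("data.xml", "rb").read()
                self.send_response(200)
            else:
                body = b""
                self.send_response(403)
            # Without these two headers, the per-requester decision above
            # is cacheable, and a shared cache may replay the "allowed"
            # response to a requester that should have been refused.
            self.send_header("Vary", "Referer-Root")
            self.send_header("Cache-Control", "private")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    # HTTPServer(("", 8000), Handler).serve_forever()

Getting those cache headers right on every response is exactly the kind of 
detail that experience says is frequently misconfigured.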


Your idea, but applied on a per-resource basis, and taking into account 
the issues I've raised above, is basically what the spec now requires. The 
client sends all the information to the third-party site, and the server 
has to send back a magic handshake confirming that it can handle 
cross-site requests. The server gets to make all the decisions. The 
handshake is designed, however, so that it can be precomputed and made 
entirely static, and so that all existing servers are automatically safe 
from any new risk.
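
As a rough illustration of "precomputed and entirely static" (Python; the 
header name and value below stand in for whatever the draft actually 
defines, so treat the exact spelling as an assumption of this sketch), the 
opt-in can be one fixed line attached to every response, with no 
per-request logic at all:

    # Hypothetical sketch: static files served with a precomputed opt-in
    # handshake. The header below is a stand-in for the draft's mechanism;
    # its exact name and syntax are an assumption of this sketch.
    import http.server

    class StaticWithHandshake(http.server.SimpleHTTPRequestHandler):
        def end_headers(self):
            # The same constant header on every response: nothing is
            # computed per request, the result is fully cacheable, and a
            # server that never sends it takes on no new risk.
            self.send_header("Access-Control", "allow <*>")
            super().end_headers()

    # http.server.HTTPServer(("", 8000), StaticWithHandshake).serve_forever()

Because the value is constant, it could just as easily be set once in the 
server configuration or written into a static file, which is the point 
about not requiring scripting from authors of static data.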

I don't really understand what you think the current model can't do that 
your proposals can.

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
