
RE: Comments on: Access Control for Cross-site Requests

From: Close, Tyler J. <tyler.close@hp.com>
Date: Wed, 2 Jan 2008 23:29:12 +0000
To: Ian Hickson <ian@hixie.ch>
CC: Anne van Kesteren <annevk@opera.com>, "public-appformats@w3.org" <public-appformats@w3.org>
Message-ID: <C7B67062D31B9E459128006BAAD0DC3D10C4EC04@G6W0269.americas.hpqcorp.net>

Hi Ian,

Ian Hickson wrote:
> On Wed, 2 Jan 2008, Close, Tyler J. wrote:
> > >
> > > This is a very dangerous design. It requires authors to be able to
> > > guarantee that every resource across their entire server is capable
> > > of handling cross-domain requests safely. Security features with
> > > the potential damage of cross-site attacks need to default to a
> > > safe state on a per-resource basis, IMHO.
> >
> > Sure, but the question is: "Whose responsibility is it?".
>
> No, that isn't the question. This isn't a blame game;

I'm not suggesting a blame game. I'm trying to help this WG with distributed application design by showing the benefits of putting program logic in the right place. In this case, we can significantly reduce the complexity of the network protocol by having the server which hosts resources take responsibility for controlling access to those resources. Consequently, we don't need to standardize a policy language for expressing an access control policy, since the policy is never communicated across the network. In addition to reducing complexity, this design also makes it easier for a server to interact with an unknown client, since the server doesn't have to trust the client with enforcing the server's access control policy.
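As a hedged sketch of the design argued for here (none of this comes from the thread itself): the hosting server keeps its access policy entirely server-side and enforces it per resource, so no policy language ever travels over the wire. The resource paths, the partner origin, and the `ALLOWED` table are all invented for illustration; the `Referer-Root` header is the one proposed later in this message.

```python
# Per-resource policy; it lives only on the server and is never sent
# across the network for a client to enforce.
ALLOWED = {
    "/public/data": {"*"},                            # open to any site
    "/private/report": {"https://partner.example"},   # one trusted site
}

def app(environ, start_response):
    """Minimal WSGI app: default-deny cross-site access, per resource."""
    path = environ.get("PATH_INFO", "/")
    # The requesting site's root, as proposed in this thread (hypothetical
    # WSGI spelling of a "Referer-Root" request header).
    origin = environ.get("HTTP_REFERER_ROOT", "")
    allowed = ALLOWED.get(path, set())
    if "*" in allowed or origin in allowed:
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"resource body"]
    # Unknown resources and unlisted origins get the safe default: denial.
    start_response("403 Forbidden", [("Content-Type", "text/plain")])
    return [b"cross-site access denied"]
```

The point of the sketch is that the policy table never needs a standardized wire format: only the yes/no outcome is observable to the client.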

> the priority here is to guarantee that a user upgrading from one Web
> browser to another is not exposed to new attack vectors, whether that
> be because of a bug on the server that the client happens to expose or
> whether that be because of a bug in the client itself.

Reducing the complexity of the client is a very good way to improve the chances that upgrading from one Web browser to another will not expose new attack vectors. You don't need to legislate server-side design to achieve that goal, and arguably should not be trying to. Flexibility and simplicity are valuable characteristics in a network protocol.

> returning particular content, which is how, e.g., Flash's "magic file"
> security mechanism was compromised (with dire consequences).

Could you provide a link?

> Sadly it is in many cases far easier for server-side authors to
> negotiate changes on the client side than it is for them to get their
> own server administration team to change configurations.

I suspect this goes back to our discussion on how to think about the 40% market share commanded by IE6.

> If we have an otherwise static page where the server decides whether or
> not the page is returned based on headers, we lose all caching benefits
> (since everything always has to go back to the server for
> confirmation). If we allowed caching anyway, we would be at risk of the
> server-side misconfiguring the cache headers (a _very_ common problem)
> and thus decisions made for one set of users being exposed to another
> set, either breaking cross-domain scripts unexpectedly, or, more
> likely, exposing sensitive data to hostile first parties.

This might be a case of "swallowed the spider to catch the fly". I don't think we should be making this protocol more complex in an attempt to compensate for a poor configuration interface in some HTTP servers. If you'd like to build a lint-like tool, that's a separate and worthwhile pursuit.
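For what it's worth, HTTP already has a mechanism for keeping shared caches honest when a response depends on a request header: the `Vary` response header. The sketch below is illustrative only; the `Referer-Root` header is the one proposed in this thread, and the trusted origin is made up.

```python
def respond(request_headers):
    """Return (status, response_headers) for a header-gated resource."""
    origin = request_headers.get("Referer-Root", "")
    headers = {
        # Declare the decision input. Without this line, a shared cache
        # could replay one origin's answer to another origin - exactly
        # the leak described above.
        "Vary": "Referer-Root",
    }
    if origin == "https://trusted.example":
        return 200, headers
    return 403, headers
```

A server that forgets `Vary` is misconfigured under existing HTTP rules; the question is whether that justifies new protocol machinery or just better tooling.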

> Your idea, but applied on a per-resource basis, and taking into account
> the issues I've raised above, is basically what the spec now requires.
> The client sends all the information to the third-party site, and the
> server has to send back a magic handshake confirming that it can handle
> cross-site requests. The server gets to make all the decisions. The
> handshake is designed in such a way, however, that the handshake can be
> precomputed and made entirely static, and that all existing servers are
> automatically safe from any new risk.
>
> I don't really understand what you think the current model can't do
> that your proposals can.

Just "be simple". We only needed the client and server to agree on a single bit: "Do you understand the Referer-Root header?" Yet somehow, we've ended up with an entire policy language with both positive and negative statements.
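A rough sketch of that single bit, under stated assumptions: the client sends the proposed `Referer-Root` header, and the server's whole obligation is to acknowledge that it understood it. The acknowledgement header name (`X-Understands-Referer-Root`) is invented here purely for illustration; the thread does not fix one.

```python
def handle(request_headers):
    """Answer a cross-site request with the one-bit acknowledgement."""
    response = {"Content-Type": "text/plain"}
    if "Referer-Root" in request_headers:
        # The single bit: "I saw which site is asking, and I still
        # answered." No policy language, positive or negative, travels
        # over the wire in either direction.
        response["X-Understands-Referer-Root"] = "yes"
    return response
```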

--Tyler
Received on Wednesday, 2 January 2008 23:54:06 UTC
