- From: Thomas Roessler <tlr@w3.org>
- Date: Tue, 10 Jun 2008 09:52:04 +0200
- To: Jonas Sicking <jonas@sicking.cc>
- Cc: "WAF WG (public)" <public-appformats@w3.org>, public-webapps@w3.org
On 2008-05-27 18:10:16 -0700, Jonas Sicking wrote:

> While it's true that servers need to be prepared for any type of
> HTTP request already, access-control makes it possible to do
> them using other users' credentials.

> So while we don't need to worry about "bad things happen when
> this HTTP request is sent", we do need to worry about "bad things
> can happen when this HTTP request is sent by a user with root
> credentials".

> Getting access to a user's cookie information is no small task.

I disagree. There are any number of real-world scenarios in which
cookies are regularly leaked -- JavaScript loaded from untrusted
sources and captive portals are just two examples that make people
bleed cookies.

Basing the design here on the premise that cookie-based
authentication should somehow be enough to protect against
cross-site request forgery strikes me as unwise, in particular when
the cost is additional complexity (and therefore risk).

As I said before, I'd prefer us to try to keep things simple;
seeing the way this evolves, I'm increasingly convinced that we're
indeed pushing too much of the enforcement into the client.

>> Assuming this proposal was accepted, we'd be getting to a point
>> where the combination of Vary, policies, and a ton of headers
>> can lead to an almost arbitrarily complex matrix that encodes
>> the decisions, and is cached on the client.

> I don't think this adds that much complexity, to be honest. First
> of all, Access-Control-Extra-Headers is likely rarely going to be
> used. In most cases there will be no need to set custom headers.
> In fact, I would happily remove the ability to set headers from
> the first version of Access-Control, other than the small set on
> the white-list.

It doesn't matter whether that feature is rarely used. The site
admin needs to keep (a) their policy, (b) the additional header
settings, and (c) the interaction between them in mind, or else
they're going to shoot themselves in the foot.
> Access-Control-Methods is worse, as it would fairly often have to
> be used.

Indeed.

> However, I'm not sure I agree that the "matrix" is getting very
> big here. In fact, I would argue that these headers make the
> matrix much smaller. The matrix that server authors have to worry
> about is the matrix of the possible requests Access-Control
> enables. As it currently stands, the server operator has to worry
> about a very big matrix, i.e. the full set of possible headers
> and of possible methods. These new proposed headers can only
> reduce that matrix.

>> I'd propose to *either* define a policy language that takes these
>> additional degrees of freedom into account, *if* they are really
>> needed, *or* let's move away from the policy language and try to
>> model the scope of authorization for the specific client-server
>> pair.

> I'm not quite following here. Please elaborate.

I was getting at the fact that the recent change proposals are
moving away from a scenario in which the web application author
writes a policy in a simple language, and more and more toward a
scenario in which this policy is broken into pieces that are spread
across multiple headers.

I think the sane options are either an extension to the policy
language (so that there is one model to look up), e.g.:

  allow method post with cookies with http-auth oauth from w3.org \
    except people.w3.org

... or a model where we do away with the notion of a policy
language entirely, and rely on Access-Control-Origin (or whatever
it's called this week) and the server making a decision as our main
mechanism.

This goes back to Mark Nottingham's ideas from January, when he
suggested that using HTTP Vary headers and Referer-Root was indeed
enough: for an unsafe method, the server indicates that it's
prepared to deal with cross-site requests during the pre-flight
phase, and then makes its real decision at the time of the actual
request, based on that request's Access-Control-Origin.
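To make the model above concrete, here is a minimal sketch (mine,
not from the thread) of a server following it: the pre-flight
response only signals general willingness to handle cross-site
unsafe methods, and the real per-origin decision is made when the
actual request arrives. The header name, allow-list, and `handle`
helper are illustrative assumptions, not part of any spec:

```python
# Hypothetical server-side enforcement: pre-flight says only "I am
# prepared for cross-site requests"; the real decision is per-request.
ALLOWED_ORIGINS = {"http://w3.org"}          # assumed site policy
UNSAFE_METHODS = {"POST", "PUT", "DELETE"}   # methods needing pre-flight

def handle(method, headers):
    """Return an HTTP status code for an incoming request."""
    origin = headers.get("Access-Control-Origin")
    if method == "OPTIONS":
        # Pre-flight: no fine-grained policy is expressed here,
        # just readiness to deal with cross-site unsafe methods.
        return 200
    if origin is not None and origin not in ALLOWED_ORIGINS:
        # Real decision, made at request time from the origin header.
        return 403
    return 200

# e.g. handle("POST", {"Access-Control-Origin": "http://evil.example"})
# is rejected, while the same request from http://w3.org is allowed.
```

The point of the sketch is where the logic lives: nothing here is
cached on the client, so there is no header/policy matrix for the
client to interpret.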
For safe methods, the server can just use HTTP error codes.

While this is indeed a different model, it seems to be the one that
a lot of the discussion here is edging toward -- e.g., the defenses
against decoupling the pre-flight check from the "real" request all
rely on server-side enforcement of the policy.

What I'd really like to see us avoid is a scenario in which we
create a mess by mixing the various models to the extent that the
entire model becomes incomprehensible.

> My concern is the people that do want to use Access-Control for
> cross-site data transfer (anyone else is of no concern since they
> won't opt in).

> These people will have to intimately know how the server reacts
> to the full set of possible headers and of possible methods. I've
> certainly never known that for any of the servers where I've put
> content.

> The smaller the portions a server opts in to, the smaller the
> risk that they accidentally opt in to something where they
> shoot themselves in the foot.

While I sympathize with that notion, I think that the current
approach (mixing a policy language with headers that possibly need
to be set differently for different sites) is likely to mess things
up further and make analysis harder.

I do think that we serve site authors best by making things simple,
easy, and consistent. That isn't the case with a model that keeps
getting more baroque. Sorry.

--
Thomas Roessler, W3C <tlr@w3.org>
Received on Tuesday, 10 June 2008 12:25:18 UTC