- From: Jonas Sicking <jonas@sicking.cc>
- Date: Tue, 10 Jun 2008 16:41:41 -0700
- To: Jonas Sicking <jonas@sicking.cc>, "WAF WG (public)" <public-appformats@w3.org>, public-webapps@w3.org
Thomas Roessler wrote:
> On 2008-05-27 18:10:16 -0700, Jonas Sicking wrote:
>
>> While it's true that servers need to be prepared for any type of
>> HTTP requests already, access-control makes it possible to do
>> them using other users' credentials.
>
>> So while we don't need to worry about "bad things happen when
>> this HTTP request is sent", we do need to worry about "bad things
>> can happen when this HTTP request is sent by a user with root
>> credentials".
>
>> Getting access to a user's cookie information is no small task.
>
> I disagree. There are any number of real-world scenarios in which
> cookies are regularly leaked - JavaScript that's loaded from
> untrusted sources, and captive portals are just two examples which
> make people bleed cookies. Basing the design here on the premise
> that cookie-based authentication should somehow be enough to protect
> against cross-site request forgery strikes me as unwise, in
> particular when the cost is in additional complexity (and therefore
> risk).

Well, if you can get access to a user's cookies and auth information, then nothing that we do here matters at all. Or at least it matters to a much, much smaller extent. This whole spec is basically here precisely to protect the information that is protected by cookies and auth headers (and for most sites, only cookies). The only additional information we're trying to protect is content behind firewalls.

> As I said before, I'd prefer us to try to keep things simple; seeing
> the way this evolves, I'm increasingly convinced that we're indeed
> pushing too much of the enforcement into the client.

I do realize that I'm biased here, since I'm an implementor for the client. However, I trust clients to get this right more than I trust servers to get it right. Partially because there are far fewer clients (on the order of 10 clients with a large user base) than there are servers (on the order of a million servers with a large user base).

>> However I'm not sure I agree that the "matrix" is getting very
>> big here. In fact, I would argue that these headers make the
>> matrix much smaller. The matrix that server authors have to worry
>> about is the matrix of the possible requests Access-Control
>> enables. As it currently stands the server operator has to worry
>> about a very big matrix, i.e. the full set of possible headers,
>> and of possible methods. These new proposed headers can only
>> reduce that matrix.
>>
>>> I'd propose to *either* define a policy language that takes these
>>> additional degrees of freedom into account, *if* they are really
>>> needed. *Or*, let's go away from the policy language, and try to
>>> model the scope of authorization for the specific client-server
>>> pair.
>
>> I'm not quite following here. Please elaborate.
>
> I was getting at the fact that the recent change proposals are
> moving away from scenarios in which the web application author
> writes a policy in a simple language, and get more and more into a
> scenario in which this policy is broken into pieces that are spread
> across multiple headers.
>
> I think the sane options are either an extension to the policy
> language (so that there is one model to look up), e.g.:
>
>   allow method post with cookies with http-auth oauth from w3.org \
>     except people.w3.org

I'm less concerned about what exact syntax we use than about what features we are providing, as long as it is easy to understand.
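For concreteness, here is a hypothetical sketch of the two styles under discussion. The header names and syntax below are illustrative assumptions, not text quoted from any draft:

  One policy language, one place to look:

    allow method post with cookies from w3.org \
      except people.w3.org

  The same rules broken into pieces across several headers:

    Access-Control: allow <w3.org> exclude <people.w3.org>
    Access-Control-Allow-Methods: POST
    Access-Control-Allow-Credentials: true

Either form expresses the same opt-in; the question in this thread is which one is easier for a server operator to understand and get right.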
That said, I do want to avoid the situation we had before, where you had to specify your policy in far too great detail, such as separately listing the allowed methods for each allowed site. Other than that I don't think it matters much; it's just a syntactic question.

> ... or a model where we do away with the notion of a policy language
> entirely, and rely on the Access-Control-Origin (or whatever it's
> called this week) and the server making a decision as our main
> mechanism. This goes back to Mark Nottingham's ideas from January,
> when he suggested that using HTTP Vary headers and Referer-Root was
> indeed enough.
>
> For an unsafe method, the server indicates that it's prepared to
> deal with cross-site requests during the pre-flight phase, and then
> makes its real decision at the time of the request, based on that
> request's Access-Control-Origin.

This still relies on the server being able to deal with the unsafe request. I.e. the server is still forced to opt in to either nothing or all possible combinations of HTTP methods and headers. So it seems no different from what the spec says today in that regard. (A rough sketch of the exchange Thomas describes appears at the end of this message.)

> For safe methods, the server can just use HTTP error codes.
>
> While this is indeed a different model, it seems to be the one that a
> lot of the discussion here is edging toward -- e.g., the defenses
> against decoupling the preflight check from the "real" request all
> rely on server-side enforcement of the policy.
>
> What I'd really like to see us avoid is a scenario in which we're
> creating a mess by mixing the various models to the extent that the
> entire model becomes incomprehensible.

I don't think the model is especially complicated for the server administrator, even with the separate headers. It seems to me that the number of headers is a poor measure of the complexity of the spec.

>> My concern is the people who do want to use Access-Control for
>> cross-site data transfer (anyone else is of no concern since they
>> won't opt in).
>
>> These people will have to intimately know how the server reacts to
>> the full set of possible headers and of possible methods. I've
>> certainly never known that for any of the servers where I've put
>> content.
>
>> The smaller the portion a server opts in to, the smaller the risk
>> that they accidentally opt in to something where they shoot
>> themselves in the foot.
>
> While I sympathize with that notion, I think that the current
> approach (mixing a policy language with headers that possibly need
> to be set differently for different sites) is likely to mess up
> things further and make analysis harder.
>
> I do think that we serve site authors best by making things simple,
> easy and consistent. That isn't the case with the model getting
> ever more baroque. Sorry.

I guess it depends on what you define as "simple". I think of it as: the fewer things you have to keep track of, the simpler it is. I think having to keep track of the full set of HTTP features for the server is more than making sure to get one or two extra headers right.
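For illustration, here is a rough sketch of the pre-flight model described above. It assumes an OPTIONS-based pre-flight and the header names used elsewhere in this thread; both were still in flux at the time, so treat every name here as an assumption:

  The client asks about an unsafe method during pre-flight:

    OPTIONS /resource HTTP/1.1
    Host: example.org
    Access-Control-Origin: http://w3.org

  The server indicates that it is prepared to deal with cross-site
  requests:

    HTTP/1.1 200 OK
    Access-Control: allow <w3.org>

  The actual request then follows, and the server makes its real
  decision based on Access-Control-Origin, answering with an HTTP
  error code if it refuses:

    POST /resource HTTP/1.1
    Host: example.org
    Access-Control-Origin: http://w3.org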
/ Jonas

Received on Tuesday, 10 June 2008 23:45:08 UTC