- From: Cameron Jones <cmhjones@gmail.com>
- Date: Thu, 19 Jul 2012 13:07:26 +0100
- To: Henry Story <henry.story@bblfish.net>, Ian Hickson <ian@hixie.ch>
- Cc: public-webapps <public-webapps@w3.org>, public-webappsec@w3.org
On Wed, Jul 18, 2012 at 4:41 AM, Henry Story <henry.story@bblfish.net> wrote:
> And it is the experience of this being required that led me to build
> a CORS proxy [1] - (I am not the first to write one, I add quickly)

Yes, the Origin and unauthenticated CORS restrictions are trivially
circumvented by a simple proxy (a sketch of one follows below).

> So my argument is that this restriction could be lifted since
>
> 1. GET is idempotent - and should not affect the resource fetched

HTTP method semantics are an obligation for conformance, not a
technical guarantee. From a security point of view, any method can be
misused for any purpose. The people at risk from the differing method
semantics are those who use them incorrectly, for example a bank which
issues transactions using GET over a URI:

http://dontbankonus.com/transfer?to=xyz&amount=100

> 2. If there is no authentication, then the JS Agent could make the
> request via a CORS proxy of its choosing, and so get the content of
> the resource anyhow.

Yes, the restriction on performing an unauthenticated GET only serves
to promote the implementation of 3rd-party proxy intermediaries and,
if they become established, will introduce new security issues by way
of indirection.

The pertinent question for cross-origin requests is: who is authoring
the link, and therefore in control of the request? The reason that
cross-origin js (script from a non-origin 3rd party executing within
a page) is not a problem for web security is that the author of the
page must explicitly include such a link. The control is within the
author's domain: to apply prudence over what they link to and include
from. Sites with integrity protect it by maintaining bona-fide links
to trusted and reputable 3rd parties.

> 3. One could still pass the Origin: header as a warning to sites who
> may be tracking people in unusual ways.

This is what concerns people about implementing a proxy: essentially
you are circumventing a recommended security practice whereby sites
use this header as a means of attempting to protect themselves from
CSRF attacks. That practice is futile; such sites would do better to
implement CSRF tokens, which is the method used by organizations that
must protect against online fraud with direct financial implications,
i.e. your bank (see the token sketch below). There are too many
recommendations for protecting against CSRF and the message is being
lost. Conversely, the poor uptake of CORS stems from people not
understanding it and being wary of implementing anything they regard
as a risk if they get it wrong.

> Lifting this restriction would make a lot of public data available
> on the web for use by JS agents cleanly. Where requests require
> authentication or are non-idempotent CORS makes a lot of sense, and
> those are areas where data publishers would need to be aware of CORS
> anyway, and should implement it as part of a security review. But
> for people publishing open data, CORS should not be something they
> need to consider.

The restriction is in place because the default methods of
cross-origin request prior to XHR applied HTTP auth and cookies
without restriction. If this were extended in the same manner to XHR,
it would allow any page to issue scripted, authenticated requests to
any site you have visited within the lifetime of your browser
session. Seemingly innocuous sites could then run complex,
multi-request CSRF attacks as background processes, against as many
targets as they can find, while you're on the page.
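To make that concrete, here is a rough sketch (TypeScript; the bank
URL is the hypothetical one from above) of the kind of background
request an innocuous-looking page can already issue today. No XHR and
no CORS is involved, and the victim's cookies ride along
automatically:

    // Hypothetical attack page: fire a cross-origin GET via an image
    // fetch, to which no CORS check applies. If the victim is logged
    // in to the bank, the browser attaches their session cookies.
    function fireCsrf(): void {
      const img = new Image();
      // The hypothetical side-effecting GET endpoint quoted above.
      img.src = "http://dontbankonus.com/transfer?to=xyz&amount=100";
    }
    // Runs quietly in the background while the user reads the page.
    window.addEventListener("load", fireCsrf);

Which is why side-effecting GET is the bank's bug and not the
browser's: the request carries the same ambient authority however it
is triggered.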
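On the CSRF-token point above, a minimal server-side sketch
(Node-flavoured TypeScript; the function names are mine, not from any
particular framework). The token is bound to the session and echoed
back with each state-changing request, something a cross-origin
attacker can neither read nor forge:

    import { randomBytes, timingSafeEqual } from "crypto";

    // Issue a fresh token when the session is created; store it in
    // the session record and embed it in every form the site serves.
    function issueCsrfToken(): string {
      return randomBytes(32).toString("hex");
    }

    // On each state-changing request, compare the stored token with
    // the one echoed back, in constant time to avoid timing leaks.
    function isValidCsrfToken(stored: string, submitted: string): boolean {
      const a = Buffer.from(stored, "hex");
      const b = Buffer.from(submitted, "hex");
      return a.length === b.length && timingSafeEqual(a, b);
    }

The Origin header, by contrast, is advisory: any non-browser client
can set it to whatever it likes, so it cannot be the primary defence.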
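And on the proxy point at the top of this message, a sketch of how
little is needed to lift the restriction for any unauthenticated
resource (Node-flavoured TypeScript, plain http targets only, error
handling mostly omitted):

    import * as http from "http";

    // Hypothetical minimal CORS proxy: fetch ?url=<target> on the
    // server side and re-serve the body with a permissive CORS header.
    http.createServer((req, res) => {
      const target = new URL(req.url ?? "/", "http://localhost")
        .searchParams.get("url");
      if (!target) {
        res.writeHead(400);
        res.end("missing ?url= parameter");
        return;
      }
      http.get(target, (upstream) => {
        res.writeHead(upstream.statusCode ?? 502, {
          // The one header that lifts the browser's restriction.
          "Access-Control-Allow-Origin": "*",
          "Content-Type":
            String(upstream.headers["content-type"] ?? "text/plain"),
        });
        upstream.pipe(res);
      }).on("error", () => {
        res.writeHead(502);
        res.end();
      });
    }).listen(8080);

Note what it never forwards: the user's cookies. The proxy only sees
what any anonymous client could fetch anyway, which is the point.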
The more sensible option is to make all XHR requests unauthenticated
unless explicitly scripted for such operation (a sketch of the opt-in
follows below). A request to a public IP address which carries no
user-identifiable information is, by definition, completely harmless.

On Wed, Jul 18, 2012 at 4:47 AM, Ian Hickson <ian@hixie.ch> wrote:
> No, such a proxy can't get to intranet pages.
>
> "Authentication" on the Internet can include many things, e.g. IP
> addresses or mere connectivity, that are not actually included in
> the body of an HTTP GET request. It's more than just cookies and
> HTTP auth headers.

The vulnerability of unsecured intranets can be eliminated by applying
the restriction only to private IP ranges, which are the source of
this attack vector (a range-check sketch follows below). It is unsound
(and potentially legally disputable) for public-access resources to be
restricted, and for public-access providers to bear costs, in order to
protect private resources. It is the responsibility of a resource's
owner to pay the costs of enforcing their chosen security policies.

Thanks,
Cameron Jones
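To illustrate the opt-in referred to above (browser TypeScript; the
data URLs are hypothetical), credentials would only ride along when
the script explicitly asks for them:

    // Default: a cross-origin GET carrying nothing user-identifiable.
    const xhr = new XMLHttpRequest();
    xhr.open("GET", "https://data.example.org/open-data.json");
    xhr.withCredentials = false; // the proposed safe default
    xhr.onload = () => console.log(xhr.responseText);
    xhr.send();

    // Opt-in: cookies and HTTP auth are attached only when explicitly
    // scripted, and only such requests need the full CORS handshake.
    const authed = new XMLHttpRequest();
    authed.open("GET", "https://api.example.org/private.json");
    authed.withCredentials = true;
    authed.send();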
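And a sketch of the range restriction suggested above (TypeScript; the
helper name is mine). A user agent could refuse unauthenticated
cross-origin requests whose target resolves into these ranges unless
the server opts in:

    // Hypothetical check: does a dotted-quad IPv4 literal fall inside
    // private (RFC 1918), loopback or link-local address space?
    function isPrivateIPv4(addr: string): boolean {
      const parts = addr.split(".").map(Number);
      if (parts.length !== 4 ||
          parts.some((p) => !Number.isInteger(p) || p < 0 || p > 255)) {
        return false; // not a well-formed IPv4 literal
      }
      const [a, b] = parts;
      return a === 10 ||                          // 10.0.0.0/8
             (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
             (a === 192 && b === 168) ||          // 192.168.0.0/16
             a === 127 ||                         // 127.0.0.0/8 loopback
             (a === 169 && b === 254);            // 169.254.0.0/16 link-local
    }

That keeps the cost where it belongs: the intranet stays protected by
default, and public resources are no longer restricted on its behalf.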
Received on Thursday, 19 July 2012 12:07:54 UTC