Re: Why the restriction on unauthenticated GET in CORS?

On 19 Jul 2012, at 14:07, Cameron Jones wrote:

> On Wed, Jul 18, 2012 at 4:41 AM, Henry Story <henry.story@bblfish.net> wrote:
>> And it is the experience of this being required that led me to build a CORS proxy [1] - (I am not the first to write one, I add quickly)
> 
> Yes, the Origin and unauthenticated CORS restrictions are trivially
> circumvented by a simple proxy.
> 
>> 
>> So my argument is that this restriction could be lifted since
>> 
>> 1. GET is idempotent - and should not affect the resource fetched

I have to correct myself here: GET and HEAD are nullipotent (they have no side effects,
and the result is the same whether they are executed 0 or more times), whereas PUT and DELETE
(along with GET and HEAD) are idempotent (they have the same result when executed 1 or more times).
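
To restate that in request terms (an illustration only, with a made-up /card resource):

    GET    /card    -> nullipotent: issuing it 0 or n times leaves the resource untouched
    PUT    /card  B -> idempotent: after 1 or n identical calls the resource is B, but the first call did change it
    DELETE /card    -> idempotent: after 1 or n calls the resource is gone, yet clearly not side-effect free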

> 
> HTTP method semantics are an obligation for conformance and not
> guaranteed technically. Any method can be mis-used for any purpose
> from a security point of view.
> 
> The people at risk from the different method semantics are those who
> use them incorrectly, for example a bank which issues transactions
> using GET over a URI:
> http://dontbankonus.com/transfer?to=xyz&amount=100

yes, that is of course their own problem, and one should not design the platform around
people who do silly things like that.
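
(To spell out the risk such a bank creates for itself: any page a customer happens to
visit can fire that GET with the customer's cookies attached - no XHR, no CORS, and no
need to read the response. One line of JavaScript is enough:

    // the browser attaches the bank's cookies to this image "fetch"
    new Image().src = 'http://dontbankonus.com/transfer?to=xyz&amount=100';

which is exactly why side-effecting GETs are the bank's problem, not CORS's.)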

> 
>> 2. If there is no authentication, then the JS agent could make the request via a CORS proxy of its choosing, and so get the content of the resource anyhow.
> 
> Yes, the restriction on performing an unauthenticated GET only serves
> to promote the implementation of 3rd party proxy intermediaries and,
> if they become established, will introduce new security issues by way
> of indirection.
> 
> The pertinent question for cross-origin requests here is - who is
> authoring the link and therefore in control of the request? The reason
> that cross-origin js which executes 3rd party non-origin code within a
> page is not a problem for web security is that the author of the page
> must explicitly include such a link. The control is within the
> author's domain to apply prudence on what they link to and include
> from. Honorable sites with integrity seek to protect their integrity
> by maintaining bona-fide links to trusted and reputable 3rd parties.

yes, though in the case of a JS-based linked data application, like the semi-functioning one I wrote and described earlier 
  http://bblfish.github.com/rdflib.js/example/people/social_book.html
( not all links work; you can click on "Tim Berners Lee" and a few others )
the original JavaScript is not fetching more JavaScript, but fetching more data from the web.
Still, your point remains valid. That address book needs to find ways to help show who says what, and of course it should not load and run just any JS it finds on the web, or else its reputation will suffer. My CORS proxy
only relays RDFizable data.
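
For the record, such a proxy is a very small thing. The sketch below is not my actual
proxy, just the general shape of one using only Node.js core modules (the ?uri= parameter
name and the port are made up, and only plain http:// targets are handled): it fetches
the target anonymously - no cookies, no auth, nothing a stranger could not see - and
relays the body with the one header that lifts the restriction for the browser.

    var http = require('http');
    var url = require('url');

    http.createServer(function (clientReq, clientRes) {
      // e.g. http://localhost:8080/?uri=http://example.org/card.ttl
      var target = url.parse(clientReq.url, true).query.uri;
      if (!target) { clientRes.writeHead(400); clientRes.end('missing uri parameter'); return; }

      // fetch anonymously: the proxy holds no cookies or credentials for the target
      http.get(target, function (originRes) {
        clientRes.writeHead(originRes.statusCode, {
          'Content-Type': originRes.headers['content-type'] || 'text/plain',
          // the one line that "lifts" the restriction for the browser:
          'Access-Control-Allow-Origin': '*'
        });
        originRes.pipe(clientRes);
      }).on('error', function () { clientRes.writeHead(502); clientRes.end(); });
    }).listen(8080);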

> 
>> 3. One could still pass the Origin: header as a warning to sites who may be tracking people in unusual ways.
> 
> This is what concerns people about implementing a proxy - essentially
> you are circumventing a recommended security practice whereby sites
> use this header as a means of attempting to protect themselves from
> CSRF attacks. This is futile and these sites would do better to
> implement CSRF tokens which is the method used by organizations which
> must protect against online fraud with direct financial implications,
> ie your bank.

I was suggesting that the browser still pass the "Origin:" header, even on a request to an
unauthenticated page, for informational reasons.
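
On the publisher's side that would amount to something like the sketch below (Node.js
again; the port, the Turtle snippet and everything else here are made up): the Origin
header arrives as information the site can log or study, but it is not used as access
control on public data.

    var http = require('http');

    http.createServer(function (req, res) {
      // "Origin:" as information: useful for the publisher's own statistics or abuse tracking
      console.log('cross-origin GET from', req.headers.origin || '(no Origin header)');

      // ...but not as an access control on public data
      res.writeHead(200, {
        'Content-Type': 'text/turtle',
        'Access-Control-Allow-Origin': '*'
      });
      res.end('<#me> a <http://xmlns.com/foaf/0.1/Person> .\n');
    }).listen(8080);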

> 
> There are too many recommendations for protecting against CSRF and the
> message is being lost. On the reverse, the poor uptake of CORS is
> because people do not understand it and are wary of implementing
> anything which they regard as a potential for risk if they get it
> wrong.
> 
>>  Lifting this restriction would make a lot of public data available on the web for use by JS agents cleanly. Where requests require authentication or are non-nullipotent, CORS makes a lot of sense, and those are areas where data publishers would need to be aware of CORS anyway, and should implement it as part of a security review. But for people publishing open data, CORS should not be something they need to consider.
>> 
> 
> The restriction is in place as the default method of cross-origin
> requests prior to XHR applied HTTP auth and cookies without
> restriction. If this were extended in the same manner to XHR it would
> allow any page to issue scripted authenticated requests to any site
> you have visited within the lifetime of your browsing application
> session. This would allow seemingly innocuous sites to do complex
> multi-request CSRF attacks as background processes and against as many
> targets as they can find while you're on the page.

indeed. Hence my suggestion that this restriction be lifted only for nullipotent,
unauthenticated requests.

> The more sensible option is to make all XHR requests unauthenticated
> unless explicitly scripted for such operation. A request to a public
> IP address which carries no user-identifiable information is
> completely harmless by definition.

yep. we agree.
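
(That is, in fact, roughly how XHR level 2 already treats credentials: nothing
identifying is sent cross-origin unless the script explicitly opts in. A small sketch,
with a made-up URL:

    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'http://data.example.org/public/foaf.ttl'); // hypothetical public resource

    // cookies and HTTP auth are only attached cross-origin if the script asks for them:
    // xhr.withCredentials = true;   // deliberate, explicit opt-in

    xhr.onload = function () { console.log(xhr.responseText); };
    xhr.send();

so the unauthenticated case is already cleanly separable from the authenticated one.)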

> 
> On Wed, Jul 18, 2012 at 4:47 AM, Ian Hickson <ian@hixie.ch> wrote:
>> No, such a proxy can't get to intranet pages.
>> 
>> "Authentication" on the Internet can include many things, e.g. IP
>> addresses or mere connectivity, that are not actually included in the body
>> of an HTTP GET request. It's more than just cookies and HTTP auth headers.
> 
> The vulnerability of unsecured intranets can be eliminated by applying
> the restriction to private IP ranges which is the source of this
> attack vector. It is unsound (and potentially legally disputable) for
> public access resources to be restricted and for public access
> providers to pay the costs for the protection of private resources. It
> is the responsibility of the resource's owner to pay the costs of
> enforcing their chosen security policies.

Thanks a lot for this suggestion. Ian Hickson's argument had convinced me, but you have just provided a clean answer to it.

If a mechanism can be found to apply restrictions to private IP ranges, then that should be used in preference to forcing the rest of the web to implement CORS restrictions on public data. And indeed, servers behind firewalls use private IP ranges, which in fact make a good distinguisher between public and non-public space.
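
The test itself is mechanical. A rough sketch (IPv4 only, the DNS resolution step left
out, the function name made up) covering the RFC 1918 ranges plus loopback and link-local:

    // is this IPv4 address in a private / non-public range?
    function isPrivateIPv4(ip) {
      var p = ip.split('.').map(Number);
      return p[0] === 10                                // 10.0.0.0/8
          || (p[0] === 172 && p[1] >= 16 && p[1] <= 31) // 172.16.0.0/12
          || (p[0] === 192 && p[1] === 168)             // 192.168.0.0/16
          || p[0] === 127                               // loopback
          || (p[0] === 169 && p[1] === 254);            // link-local
    }

    isPrivateIPv4('192.168.1.20'); // true  -> keep the current CORS restrictions
    isPrivateIPv4('203.0.113.7');  // false -> public: an unauthenticated GET could be freed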

So the proposal is still alive it seems :-)

> 
> Thanks,
> Cameron Jones

Social Web Architect
http://bblfish.net/

Received on Thursday, 19 July 2012 12:44:13 UTC