Re: CORS should be abandoned

On 10/13/17 10:52 AM, Jack (Zhan, Hua Ping) wrote:
> The context is that 1st.com needs the public resources on
> other sites such as 2nd.com, but do not trust them completely.

That is _part_ of the context, and the _smaller_ part.

The other, bigger, part of the context is that 2nd.com may not trust 
1st.com either.  So 2nd.com needs to have a way to identify which of its 
resources are "public" and which are not.  At the very least, 2nd.com 
therefore needs to know exactly who is making the request for the 
resource so it can decide whether to share it.

The problem is that, pre-CORS, 2nd.com had no way to tell who was
making the requests, for the following reasons (a concrete sketch
follows the list):

* Browsers would send 2nd.com cookies for all requests to 2nd.com, no 
matter the source.
* Browsers do not reliably send Referer headers, and users have privacy 
requirements around not sending them at all in many cases.
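
To make that concrete, here is roughly everything a request handler on
2nd.com gets to look at pre-CORS.  (A minimal TypeScript-on-Node
sketch; the port and behavior are made up for illustration.)

  import * as http from "node:http";

  http.createServer((req, res) => {
    // The browser attaches 2nd.com's cookies because the *target* is
    // 2nd.com, no matter which page caused the request to be made.
    const cookie = req.headers["cookie"];   // always looks "logged in"

    // Referer may be absent, truncated, or suppressed for privacy
    // reasons, so it cannot carry the authorization decision.
    const referer = req.headers["referer"];

    // Nothing here reliably says "this request was made on behalf of
    // 1st.com", which is exactly the information 2nd.com would need.
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end(`cookie: ${Boolean(cookie)}, referer: ${referer ?? "none"}\n`);
  }).listen(8080);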

CORS solves this problem by providing an explicit Origin header on CORS 
requests that is enforced by the browser to properly represent the "who 
is making this request?" information.
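
Concretely, for script running on a page at https://1st.com
(hypothetical origins throughout, as run from the devtools console),
the browser both supplies the Origin header and enforces the server's
answer; the page can neither forge the former nor bypass the latter:

  // The browser adds "Origin: https://1st.com" to this request by
  // itself; Origin is a forbidden header name, so page script cannot
  // set or spoof it.
  const resp = await fetch("https://2nd.com/publicData");

  // The request still reaches 2nd.com.  But unless the response comes
  // back with an Access-Control-Allow-Origin header that permits
  // https://1st.com, the browser rejects the fetch and the page never
  // gets to read the body.
  console.log(await resp.text());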

Now it would have been possible to _just_ add the Origin header and
leave everything else to 2nd.com.  This had three problems: one
theoretical and two practical.

The theoretical problem is that, as pointed out earlier in this thread,
default-closed (whitelist) is more secure than default-open (blacklist).
Only sending an Origin header would allow individual sites to whitelist,
but the lack of enforcement in the browser itself would mean that any
site that forgot to take measures would be fully accessible.  That means
that on a site-by-site basis we would effectively have a blacklist
solution, not a whitelist one.  Or, put another way, there are a lot
fewer browsers than sites, so making the security decision in the
browser is better from a whitelist perspective: far fewer places have to
implement the security check correctly in the first place.
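
To illustrate the difference: under an "Origin header only, browser
enforces nothing" design, every site would have needed something like
the following check on every resource that is not meant to be public
(another hypothetical TypeScript-on-Node sketch), and forgetting it
anywhere would leave that resource readable by any web page:

  import * as http from "node:http";

  // Hypothetical list of origins this site is willing to serve.
  const allowedOrigins = new Set(["https://partner.example"]);

  http.createServer((req, res) => {
    const origin = req.headers["origin"];
    if (typeof origin === "string" && !allowedOrigins.has(origin)) {
      // The server has to refuse outright: in this design nothing in
      // the browser would stop the requesting page from reading
      // whatever body gets sent back.
      res.writeHead(403, { "Content-Type": "text/plain" });
      res.end("forbidden\n");
      return;
    }
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("not-actually-public data\n");
  }).listen(8080);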

The first practical problem, in addition to the above theoretical one, 
was that there was good evidence that people would in fact forget to add 
whitelisting behavior to their sites.  That is, the problem was not 
merely theoretical.  Even now, when they have to take explicit action to 
do it, people mistakenly add "Access-Control-Allow-Origin: *" site-wide 
sometimes.  The sin of omission would have been a lot more common than 
this sin of commission.

The _second_ practical problem, which made the "let the site handle it"
approach a complete non-starter, was the existence of a huge number of
sites that already depended on the existing same-origin policy to
protect them.  Silently changing the policy out from under them, and
making them all vulnerable unless they changed how they responded to
requests, would have been highly irresponsible.  Getting them all to
make the requisite changes was impossible.  This was an important design
constraint in this space.

> People like Florian Bosh & Tab Atkins were thinking how should they
> instruct browsers to distinguish
> requests for public data such as tickerMSFT from requests for
> secreteData.

Yes, this was one of the design constraints in this space, as I pointed
out above.

> I think they went too far and off road (while I can only see 1 step
> ahead) from what we are looking for. Clearly, the authorization check
> is the responsibility of 2nd.com, not of browsers.

That's fine in theory _if_ 2nd.com has the information to perform the 
authorization check.  Again, see above.

> Florian Bosh & Tab Atkins never asked how
> the manager of 2nd.com can distinguish requests referred by
> http://1st.com from requests referred by https://2nd.com/entrypoints.
> I would guess they know, and I guess we all know, so let me omit this.

Actually, the problem, per the above, is that it _can't_ distinguish
them (pre-CORS).  That is _precisely_ one of the problems here.

> And no security wary manager will allow the authorization
> check of their protected resources to be dependent on browsers.

Empirically, you are wrong.  Sites do this all the time, and rely on the 
same-origin policy implemented in browsers to protect them.

Again, key to this is that browsers send authentication credentials 
based on the target of the request, not the source of the request.
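
For instance, with fetch() from a page on https://1st.com (hypothetical
origins again; older mechanisms like form submissions and <img> loads
attached cookies unconditionally, SameSite cookie attributes aside):

  // Which cookies get attached is decided by the *target* URL: the
  // browser sends the cookies it holds for 2nd.com, even though script
  // on 1st.com has never seen them and cannot read them.
  const resp = await fetch("https://2nd.com/someData", {
    credentials: "include",
  });
  // CORS is what keeps the 1st.com page from reading this response
  // unless 2nd.com's reply explicitly allows it.
  console.log(await resp.text());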

> If this Adobe’s approach is taken, does it bring any new security
> issue?

Yes, it does; see above.  The reason you think it does not is that your
threat model only considers threats to 1st.com, not to 2nd.com.  But in
practice there are threats to 2nd.com here too.

> Be noted a.html has access to https://2nd.com/publicData does not
> implies it has access to https://2nd.com/secreteData as 2nd.com has
> authorization check procedure for its secreteData.

I think this is the fundamental disagreement you're having with people.
As noted above, 2nd.com (1) may not have a way to do this authorization
check effectively (at least pre-CORS) and (2) in practice often does not
do it at all, relying instead on the same-origin policy.

> Don’t tell me that the manager forgot to do so

I will tell you precisely that.  This is exactly what sites do.

You can check this yourself if you want to; experiments trump
theorizing every time.  Take an open-source browser (Firefox, Chromium,
WebKit), modify it to make all CORS checks pass, and then see what
happens when you log into some sites and, in a different tab/window,
attempt to fetch data from them cross-site.
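
The probe itself can be as simple as this, run from a page on some
unrelated origin after logging into the target site in another tab
(placeholder URL, obviously):

  // In the modified browser this succeeds and prints the logged-in
  // user's data for any site that was relying on the browser-side
  // refusal rather than on its own origin checks; in a stock browser
  // the fetch is rejected unless the site opts in via CORS.
  const resp = await fetch("https://some-site-you-use.example/account",
                           { credentials: "include" });
  console.log(await resp.text());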

To get you started, the way to skip CORS altogether in Firefox is to 
just remove the block with the DoCORSChecks call at 
http://searchfox.org/mozilla-central/rev/ed1d5223adcdc07e9a2589ee20f4e5ee91b4df10/dom/security/nsContentSecurityManager.cpp#545-548

> Don’t tell me you think the referrer based authorization check

As noted above, referrers cannot be relied on for anything, because
browsers can be, and are, configured not to send them in various cases.

> I feel the point I want to make is simple, but I found that people can
> not understand me.

I think people did understand you.  That said, the post I'm responding 
to was definitely a lot clearer and more understandable than the earlier 
ones and much higher on technical content as opposed to insults.

But I also think you are failing to understand when people tell you
that your assumptions about security practices do not match the actual
reality of those practices.  _That_ is the critical misunderstanding
here, from my point of view.

In a world in which there had never been a same-origin policy, it's 
possible that server-side security practices would have been quite 
different and your proposed approach would have been viable.  I am not 
100% convinced of that, because, again, whitelisting implemented in a 
minimal number of chokepoints has a lot fewer failure modes than 
whitelisting scattered across a large number of implementations.

But we don't live in that world.  We live in a world in which 
same-origin policy existed for a long time, and was quite entrenched in 
people's thinking and development practices before work on what would 
become CORS started.  And in that world, your proposed solution was a 
non-starter, because of the huge number of existing deployments that 
depended on same-origin policy.

I strongly urge you to actually perform the experiment I suggest above: 
change a browser to allow cross-site access instead of enforcing CORS 
checks, and see how many sites are relying on same-origin policy to 
protect their non-public resources.

-Boris

P.S.  This will be my only post in this thread.

Received on Friday, 13 October 2017 15:42:54 UTC