Re: Chromium's support for CORS and UMP

Boris Zbarsky wrote:
> On 5/11/10 12:27 AM, Nathan wrote:
>> This leaves us in a scenario where it is the norm to download, install
>> and trust an application that runs in the browser
> 
> Perhaps.  The difference is that it's much harder to do a drive-by app 
> install.
> 
>> agree~ish, imho it's more the user giving the Site A potential access to
>> all the data from Site B which the user has permission to see; if the
>> browser pops up that facebook is trying to access company-payroll then
>> surely the user will be able to make a pretty informed decision..?
> 
> You assume the server the company payroll is on is something the user 
> will recognize.  What if it's just arachnid.mycompany.com?  Or even 
> a1273.mycompany.com?  Or cluster-16-machine-2.mycompany.com?
> 
>> agreed, perhaps if that decision was taken out of their hands for
>> critical resources which they didn't control (like company-payroll) it
>> would be a better scenario... perhaps one could use something like cors
>> to limit xhr access to critical resources..
> 
> The problem is that there is _already_ a large installed base of 
> critical resources that is not protected with CORS.  Rule #1 of security 
> is that you don't introduce new functionality that's insecure unless the 
> entire rest of the world gets updated to handle it.  It won't be 
> updated; certainly not in a timely manner.
> 
>> again, agreed - hence why I'm suggesting that it's better to flip the
>> scenario order, allow unless denied, as the people who are in charge of
>> security of sensitive data should really be doing the studying for their
>> chosen career, not joe public who just wants to put up a few blog posts,
>> social profile etc on a shared host - how do you explain to joe that his
>> public profile isn't viewable in his public profile viewer unless he
>> changes some CORS headers on his shared host that he doesn't have access
>> to?
> 
> Sounds to me like the public profile viewer is broken (e.g. is using the 
> wrong tool to do the job).  XHR is NOT the right tool for everything.

agreed that XHR is not the right tool for everything - but the fact 
remains that right now we could all be using pretty much universal 
applications for common tasks in the browser, across our plethora of 
devices - an idea HTML5 is sparking in many people, and like all good 
ideas it will happen, with or without CORS. It's almost inevitable that 
somebody will build another browser that doesn't respect the same-origin 
policy and CORS, or that web OSes like Chromium OS will bypass it 
completely or use something else.
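
(As an aside on the Joe scenario above: the change he can't make is 
tiny - his shared host would only need to send one extra response 
header on his public profile data, something like:

    Access-Control-Allow-Origin: *

but on shared hosting he typically has no way to set even that, which 
is exactly the problem.)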

>>> 1) You could have the UA ask your server to perform the action
>>> cross-site itself. No one's stopping you from doing that.
>>
>> well noted - yet the whole point of making a purely client side
>> application
> 
> Hold on.  Making purely client-side applications isn't exactly an 
> inviolable human right.  There are lots of things that can't be done 
> purely on the client side... yet.

yet.. but soon, as I'm sure you realise.

>> is so that it doesn't need a server and uses the web as a
>> distributed data tier.. not to mention http cache'ing and bandwidth
>> issues, it creates a silo, a bottleneck, adds in a layer where all a
>> users data in-out can be monitored and collected, more.. there are many
>> implications to this.
> 
> Sure.  It's just the only way to make this work right this second.

this may be the point.. but if one were to make something else to 
access resources on the web, in a browser, through JS, then it'd pretty 
much just be XHR or browser extensions all over again, wouldn't it?

>>> 2) You could use one of the APIs being proposed that do not send
>>> user credentials and hence might well not require CORS, since
>>> they will only be able to fetch public data (modulo the
>>> firewall/routing issue, which will indeed need to be resolved).
>>
>> ? what's the point in CORS if it can be completely circumvented?
> 
> What would be circumvented?  The point of CORS is that site A can't 
> pretend to be user U to site B and get information it couldn't get by 
> running wget on one of site A's own servers.

indeed, that's the issue - I'm not saying it isn't one, or that 
CORS shouldn't address it - merely that imho it's addressing it the 
wrong way round; the exact same scenario could be stopped using CORS 
with an inverted model: allow by default, deny explicitly.
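
To sketch what I mean (the deny header below is purely hypothetical, 
not part of any spec): today site B must opt in before site A's script 
can read even its public resources:

    Access-Control-Allow-Origin: http://siteA.example

whereas in an inverted model the read would succeed by default, and a 
sys admin protecting a sensitive resource would opt out explicitly with 
something like:

    Access-Control-Deny-Origin: *

The same attack scenario gets stopped either way; the difference is who 
has to act.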

> If a browser exposes an API to site A which does exactly what wget from 
> site A would do, then there's no reason to apply CORS: site A could 
> already get the data it's getting.  The only difference is addressing 
> the issues you correctly note with site A proxying all the access 
> through its server.

exactly, but the current setup stops XHR from getting resources that 
could already be retrieved from site A's own servers with wget - with 
an inverted model all those issues would disappear, leaving only one: 
informing sys admins that they must protect the resources that need 
protecting - and that is their job, after all.
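
To make that concrete (siteB.example and the path are made-up names): 
this succeeds from any machine on the public internet, including site 
A's servers:

    wget http://siteB.example/public-profile.json

while the equivalent request from site A's script is refused by the 
browser unless site B sends the CORS header:

    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'http://siteB.example/public-profile.json');
    // without Access-Control-Allow-Origin from site B, the browser
    // treats this as a network error and onload never fires
    xhr.onload = function () { alert(xhr.responseText); };
    xhr.send();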

Honestly, I understand the vendors not wanting to introduce an 
insecurity into corporate networks through their browsers. It was an 
issue, and it was addressed by the same-origin policy and CORS - but 
perhaps it was addressed the wrong way, and the mistake will need 
correcting. It'll have to be one day, as we both know; even if it's 
with a different set of technologies that run in a browser, called 
something-not-xhr-but-does-the-same and 
something-not-called-cors-but-does-the-same.

>>> No. The whole point here is that just because a user visits your site
>>> doesn't mean that your script should be able to impersonate that user
>>> when talking to other sites. That means either not allowing your
>>> script to talk to other sites unless they explicitly say they're ready
>>> for that sort of thing (current setup + CORS) or not allowing your
>>> script to impersonate the user in the requests it sends.
>>
>> granted, conflation over the current common setup, and the scope I'm
>> talking about which is using client side certificates over https,
>> restful and stateless - no impersonation can happen :)
> 
> That's not necessarily true.  If you can send requests of your choosing 
> using the user's client certificate to sites of your choosing and read 
> the resulting responses... how is that not impersonation of the user?

as you said above, the problem is stopping site A pretending to be the 
user to site B; this setup makes that impossible, and what it leaves is 
the scenario you note above: an application in a browser with web 
access could access a resource using a client certificate (after the 
user allows it) and then send that information to a third party (after 
the user allows it / requests it to happen). There are many ways to 
address this, notably by simply keeping the user informed of what's 
going on: a small UI button that puts the browser in a 
confirm-every-action mode when working with sensitive data, trusted 
zones, whatever.
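
As a very rough sketch of the kind of thing I mean (entirely 
hypothetical - no browser has this mode today, and the real prompt 
would live in browser chrome, not page script; this just illustrates 
the flow):

    // hypothetical confirm-every-action mode: the browser poses this
    // question before releasing the client certificate on a request
    function guardedRequest(url, onAllowed) {
      if (window.confirm('Allow this application to access ' + url +
                         ' using your client certificate?')) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url);
        xhr.onload = function () { onAllowed(xhr.responseText); };
        xhr.send();
      }
      // if the user declines, the request is never sent at all
    }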

Best & thanks for taking the time so far to go through all this,

Nathan

Received on Tuesday, 11 May 2010 05:12:18 UTC