- From: Henry Story <henry.story@gmail.com>
- Date: Mon, 16 Feb 2015 12:52:23 +0100
- To: Michiel de Jong <michiel@unhosted.org>
- Cc: public-webid <public-webid@w3.org>, Read-Write-Web <public-rww@w3.org>, Sylvain Le Bon <sylvain.lebon@openinitiative.com>
- Message-Id: <89ECAEBE-D9B2-4A48-BB9C-D1B5BB857C88@gmail.com>
> On 16 Feb 2015, at 00:43, Michiel de Jong <michiel@unhosted.org> wrote:
>
> Interesting proposal! I don't know a lot about combining CORS with client certs (just found https://connect.microsoft.com/IE/feedback/details/1028302/ie11-cors-preflight-request-is-aborted-when-server-requests-client-tls-certificate stating it's broken in IE11).

Thanks for bringing that up, Michiel. My post was purely at the theoretical level up to this point :-)

Though the specification is difficult to read, it seems that the pre-flight request should not contain authentication information:
http://www.w3.org/TR/cors/#cross-origin-request-with-preflight
( Can anyone point to exactly where that is written? )

If that is correct, then the server responding to the pre-flight request will not be able to find out the identity of the user in order to correctly populate the Access-Control-Allow-Origin fields in the response, as I had suggested.

> For cookie-based systems, the usual trick is to switch to bearer tokens in the Authorization request header (that's how we added CORS to hood.ie for instance). But for client certs you obviously don't want to do that, so a user-supplied white-list like you propose could work, I guess.

I am not sure how that could still work. What is really needed is to understand what CORS is trying to do. First one should look at what security intuitively requires, and then one should ask why CORS moves away from what is intuitively simple.

As I understand it, things should be relatively simple:

1) for requests (GET, PUT, POST, …) on resources that don't require authentication:
   the default should be that the JS can do whatever it wants.

2) for requests on resources that do require authentication:
   a) the browser should only allow the request if the user trusts the JS to make that request.
   b) the server may need to take into account that the authenticated user is the user+JS.

But then we have some weird complexity built into CORS. Looking at the two cases above, I think it would be worth listing all these oddities, and then seeing what explains them, so that one can think about better possibilities.

1) requests that do not require authentication:
   q1: why is the Origin sent at all? And why are there still restrictions?
   q2: why does POSTing a url-encoded form not require pre-flight, but POSTing other data does?

2) why are the pre-flight requests needed at all?

I did ask q1 on the webapps list in 2012, and the reason they require CORS there is apparently that the browser builders want to take into account cases where the browser is behind a firewall, where it may in fact be decided that if you are behind the firewall then you are authenticated. Not a good idea, and even less of a good idea to then build standards on it. But you can then see why, in that case, the Origin needs to be sent. ( You can read more about this in a thread I started in 2012: https://lists.w3.org/Archives/Public/public-webapps/2012JulSep/0166.html )

I think the pre-flight requests are also required because most servers are not by default built to take account of the danger that a nasty piece of JS may be making the requests, so the browser needs to first check that the server is intelligent enough, by looking at the OPTIONS response.

The above hypotheses will require more investigation. But assuming they are true, we could perhaps solve the problem in a more clever way via a proxy that we know is on the internet. Changing the browsers is going to take too long. So the idea is that a WebID-enabled proxy, with the ability to act as a secretary for the authentication needs of larger servers ( https://www.w3.org/wiki/WebID/Authorization_Delegation ), could simply solve the problem of knowing whether the user is happy to trust the script, using something like the method I described in my previous mail.
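To make q2 concrete: here is a rough sketch, in Python, of the "simple request" rule as I read the CORS spec. It only models the method and Content-Type conditions; the spec also pre-flights on any non-safe-listed request header, which this sketch ignores.

```python
# Sketch of the CORS "simple request" rule (my reading of the spec):
# a cross-origin request skips the OPTIONS pre-flight only if its
# method and Content-Type fall inside a small safe-listed set.

SIMPLE_METHODS = {"GET", "HEAD", "POST"}
SIMPLE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",  # plain forms: no pre-flight
    "multipart/form-data",
    "text/plain",
}

def needs_preflight(method, content_type=None):
    """Return True if the browser must send an OPTIONS pre-flight first."""
    if method.upper() not in SIMPLE_METHODS:
        return True   # e.g. PUT, DELETE always pre-flight
    if content_type is not None and content_type not in SIMPLE_CONTENT_TYPES:
        return True   # e.g. POSTing application/json pre-flights
    return False

# Why a url-encoded POST goes straight through but other POSTs don't:
print(needs_preflight("POST", "application/x-www-form-urlencoded"))  # False
print(needs_preflight("POST", "application/json"))                   # True
print(needs_preflight("PUT", "text/turtle"))                         # True
```

So a linked data client doing PUTs of Turtle will always hit the pre-flight path, which is what makes the proxy idea above attractive.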
The proxy could then act as the user directly, or with his authority, and so avoid the pre-flight requests.

In a linked data world it does not make much sense for the server to trust only agents coming from certain domains, as any linked data agent is going to have to jump around all domains. What is important is that the user declare his trust in certain software agents. Having to do this by « Origin » is pretty ugly, but it seems to be the best we have available at present in browsers.

> On Sat, Feb 14, 2015 at 10:00 AM, Henry Story <henry.story@gmail.com> wrote:
> Hi all
>
> Following a Pull Request on rww-play by Sylvain Le Bon ( https://github.com/read-write-web/rww-play/pull/133 )
> relating to the question as to how to set the CORS headers for a linked data server, I thought about the issue a bit,
> and came to the following conclusion. I already updated the WebAccessControl wiki with the following text:
> https://www.w3.org/wiki/WebAccessControl#Cors_User_Agents
>
> Here is the proposal:
>
> Setting "Access-Control-Allow-Methods" for a particular Agent
>
> A Linked Data publisher may want to make a whole set of resources available over CORS. For completely publicly accessible resources that is reasonably easy: one can just add (please check)
>
> Access-Control-Allow-Origin: *
> Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept
>
> In read mode that should work fine. (In write mode, one may need to be careful to log the user and the Origin that made the change.)
>
> But what should the server do for any resource that is protected? It cannot in a blanket manner state that the resource is accessible to every Origin.
> That would make it much too easy for a piece of JavaScript to use the authentication state in a browser to do whatever the designer of the JS wanted, rather than what the browser user wanted. But if the server selects a particular Origin that it trusts, then that would limit the growth of JavaScript applications very severely, to those known and trusted by the data publisher.
>
> It should really be up to the browser user to specify which JavaScript he trusts ( sadly this can only be done with the extremely coarse Origin tool ). The suggestion is therefore that the user's WebID contain a list of trusted origins, and that the server use those to decide what Origin to add to the header:
>
> <#i> acl:trustedOrigin <http://apps.w3c.org/>, <http://apps.timbl.name/>, </> .
>
> The server, after authenticating the user, would then add those origins to the header. If we want to allow that one trusts some origins for all read operations, but only some for write operations, then something more complex would be needed, such as
>
> <#i> acl:trustedOrigin [ acl:mode acl:Read;
>        acl:agentClass foaf:Agent;
>        acl:accessToClass foaf:Document; # <- give access to all documents
>      ],
>      [ acl:mode acl:Write;
>        acl:accessToClass foaf:Document; # <- give access to all documents
>        acl:agent [ acl:origin <https://apps.w3.org/> ], [ acl:origin <> ]
>      ] .
>
> The server, after authenticating the user, could then use that information to write out what Origin is allowed what action.
>
> Any feedback?
>
> Social Web Architect
> http://bblfish.net/

Social Web Architect
http://bblfish.net/
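Coming back to the acl:trustedOrigin proposal quoted above, a minimal sketch of how a server might apply such a list after WebID authentication. All names here are illustrative, not from any spec; the trusted set stands in for the acl:trustedOrigin values fetched from the user's profile.

```python
# Sketch of the proposal: after authenticating the user via WebID, the
# server looks up the acl:trustedOrigin values in the user's profile and
# echoes the request's Origin back only if the user has declared trust
# in it. (Function and variable names are illustrative only.)

def allow_origin_header(request_origin, trusted_origins):
    """Return the Access-Control-Allow-Origin value to send, or None to refuse."""
    if request_origin in trusted_origins:
        # Echo the specific origin rather than "*": the response varies
        # per user, and "*" cannot be combined with credentialed requests.
        return request_origin
    return None

# e.g. the origins from  <#i> acl:trustedOrigin <http://apps.w3c.org/>, ...
trusted = {"http://apps.w3c.org", "http://apps.timbl.name"}
print(allow_origin_header("http://apps.timbl.name", trusted))  # echoed back
print(allow_origin_header("http://evil.example", trusted))     # None
```

The read/write variant of the proposal would simply key the trusted set by acl:mode before making the same check.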
Received on Monday, 16 February 2015 11:53:04 UTC