- From: Bruno Harbulot <Bruno.Harbulot@manchester.ac.uk>
- Date: Mon, 12 Jul 2010 19:41:02 +0200
- To: Henry Story <henry.story@gmail.com>
- CC: Harry Halpin <hhalpin@w3.org>, Tim Berners-Lee <timbl@w3.org>, Jeffrey Jaffe <jeff@w3.org>, foaf-protocols@lists.foaf-project.org, Ian Jacobs <ij@w3.org>, Ivan Herman <ivan@w3.org>, www-archive <www-archive@w3.org>, Thomas Roessler <tlr@w3.org>
On 12/07/2010 01:30, Henry Story wrote:
> On 12 Jul 2010, at 00:04, Bruno Harbulot wrote:
>>>>
>>>> Digesting. Overall, great to follow, +1 to Bruno's points re
>>>> security.
>>>
>>> You mean the one where he suggests that the WebId protocol is not
>>> more secure than OpenId? Why do you think this is true? I think
>>> it is clearly wrong, in fact obviously wrong for a number of
>>> reasons of which the simplest is just the relative complexity of
>>> the two protocols.
>>
>> I think you've missed my point. The complexity of the protocol
>> isn't really at stake here.
>
> Well if you make a general statement, without specifying exactly in
> what way you think it is less or only equally secure, then the
> complexity of the protocol is an issue. You can't ignore that.

I did specify why I think it's only equally secure, in my next sentence. The point, again, is that both protocols start off by doing a GET on the URI, whether it's a WebID or an OpenID. That's the first building block upon which the rest relies, and also where the weak link is. Doing a GET is fine of course, but the document retrieved becomes the primary trusted source of information for verifying the identity of the user. To enhance security, we need other trusted sources of information that can corroborate the public key (because that's what the authentication is based on).

>> Regarding security, what matters is the weakest link in the chain:
>> it's the same for OpenID and FOAF+SSL/WebID (without building a
>> cryptographic web of trust).
>
> We have to be clear what the server knows at the end of the WebID
> authentication. At that point it knows just that the agent at the end
> of the SSL connection is identified by that WebID, that that WebID
> refers to that entity.

Yes, but for security to be there, you need to assert (with various degrees of strength, depending on the level of assurance required) the binding between the identifier and the person. That's what authentication is about (regardless of what you know about the user).

> The Web of Trust then plays a role afterwards for issues of
> determining how much you trust what that agent says about himself.
> One element of that web of trust could be that you are related in
> some way to a bank, that is itself listed in the government banking
> list. Which is pretty much how we get to trust banks now.

Yes, but you're just building a network of reputation between identifiers. What I'm disputing is the level of security that really ties the identifier to the physical person.

>> In both cases, whoever controls the hosting of the WebID controls
>> the way the physical user may be challenged to prove their
>> identity.
>
> Not sure what you mean by "the way the user may be challenged". That
> service controls the WebId document. The service that controls the
> challenge is the relying party the user is logging into. It requests
> the client certificate.

Well, for users to authenticate successfully with a WebID, they have to show that they have the private key matching the public key of that WebID (which is done via the SSL handshake and the comparison of the public key with the one in the FOAF document for that WebID). That's how users are challenged in our model, to prove the URI is theirs. OpenID is a bit more flexible in the end mechanism, but again, at the end of the day, the path to the "correct answer" is dictated by the hosting server.
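
To make that concrete, here is a rough sketch, in Python with rdflib, of the check I keep referring to. It is only an illustration under my own assumptions: the function name, the data shape and the cert vocabulary terms (cert:key, cert:modulus, cert:exponent) are placeholders, not a description of any particular implementation.

# Sketch only: real code would need HTTPS handling, content negotiation,
# error handling, and whatever vocabulary the FOAF document actually uses.
import rdflib

def webid_claim_verified(webid_uri, presented_modulus, presented_exponent):
    """True iff the document served at webid_uri publishes a key matching
    the one taken from the TLS client certificate."""
    graph = rdflib.Graph()
    graph.parse(webid_uri)  # the same kind of GET that OpenID discovery relies on
    query = """
        PREFIX cert: <http://www.w3.org/ns/auth/cert#>
        SELECT ?mod ?exp WHERE {
            ?webid cert:key ?key .
            ?key cert:modulus ?mod ;
                 cert:exponent ?exp .
        }"""
    rows = graph.query(query, initBindings={"webid": rdflib.URIRef(webid_uri)})
    for mod, exp in rows:
        if int(str(mod), 16) == presented_modulus and int(str(exp)) == presented_exponent:
            return True  # the hosted document corroborates the handshake key
    return False  # no match: reject the WebID claim

Note that the only source consulted is the document served at the WebID itself, which is exactly where the weak link is.
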
>> For more important services such as banking, I don't want anyone
>> who's not me or my bank to be able to impersonate me, or
>> effectively change my password (or public key in our case). I don't
>> want to have to wonder whether my hosting company is sufficiently
>> honest or offers sufficient protections against hacking on their
>> server for this sort of service.
>
> That is not a problem. Your bank could issue a WebId too with its
> certificate, and in fact it could make sure during the SSL connection
> that the browser only sent a cert issued by it.

Sure, but then you're using a hierarchical PKI, and the WebID isn't used for authentication.
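
To spell out why: what you're describing amounts to something like the following sketch (today's Python ssl module; the file names are placeholders of mine). The handshake only succeeds for client certificates issued by the bank's own CA, so the authentication decision comes from the issuer check; dereferencing the WebID adds nothing to it.

# Sketch of the CA-constrained variant, not of the WebID protocol itself.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="bank-server.pem", keyfile="bank-server.key")
context.verify_mode = ssl.CERT_REQUIRED              # demand a client certificate
context.load_verify_locations(cafile="bank-ca.pem")  # trust only the bank's own CA

with socket.create_server(("", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        # accept() performs the handshake; it raises for certs from any other issuer
        conn, addr = tls_listener.accept()
        conn.close()
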
>> It's a matter of risk assessment. If someone was to tamper with
>> your CV on your website, you'd be upset, but you would certainly
>> manage to limit the damage. If someone tampered with your FOAF
>> document and if your bank allowed connections into your account
>> simply on the basis of what the document dereferenced from your
>> WebID contained, you'd be more worried.
>
> This is just an issue of who you trust for what. I may trust your
> bank about whether you will pay me a certain sum, but not about your
> real business motives, your girlfriends, your music taste, etc.... I
> certainly would not trust your bank about any musical taste to be
> honest, even less than I trust them about money, given all the
> amazing things that happened this last year in the banking industry.
>
> People in the security business have tended to treat banks as divine
> creatures. I think we can drop that now. They are human, all too
> human.

Sure, they're not perfect (and sometimes their security is far from great), but it's again about whether their mechanisms go sufficiently far to check that you're the legitimate user and not someone else. It's not about who you trust for what, it's about how much you trust them to authenticate the user.

>> OpenID and WebID, by making the URL and its dereferencing the
>> primary pillar of trust, are fine for a large number of use-cases
>> (most social networking frameworks, blogs, ...), but when the legal
>> and financial consequences have higher stakes, you'd want a higher
>> level of assurance.
>
> But when you want to meet a girlfriend, your future wife, then I'd
> trust your social network more than your bank.

Again, only if your social network (online) has checked appropriately that the person they've been talking to in real life does indeed have that particular handle online. This is not always the case (especially when you first get friend requests on Facebook, for example: do you systematically contact that person via another means to make sure it's them?).

>> As it stands, WebID by simple dereferencing, without a
>> cryptographic web of trust, offers the very same level of assurance
>> as OpenID.
>
> No, it is much more secure:
> - no spelling mistakes by the end user as he types his id
> - less complex protocol (less errors therefore possible on that level)
> - forces HTTPS, which is well documented.

The spelling-mistake use-case is really a detail:
- There will soon be plugins for browsers that fill in your OpenID (there probably are some already).
- You can go on a few OpenID-enabled websites now: they'll let you choose an OpenID provider (e.g. Google) and you'll log on quite easily from there.

>> Building a network of reputation (which is what you call "web of
>> trust") by linking WebIDs as friends of other WebIDs is pointless
>> if you can't make sure that the physical person gaining access to a
>> WebID is indeed who they are. That's something that a
>> (cryptographic) web of trust (a la PGP) can help with (I'm not
>> saying it's perfect).
>
> PGP is also less secure than foaf+ssl. For one it is a lot more
> difficult to retract a mistake in PGP, plus PGP leaks information:
> you can never undo a friend. See the FAQ:
>
> http://esw.w3.org/Foaf%2Bssl/FAQ#How_does_this_improve_over_X.509_or_GPG_Certificates.3F

"PGP is also less secure than foaf+ssl" is a bold statement, backed by a FAQ that /you/ wrote... What I'm talking about is still not about undoing a friend relationship, it's about checking the association between identifier and person. PGP makes only that sort of assertion. You can sign someone's PGP certificate after checking their passport; you don't have to be their friend or know anything about them.

>> The real security enhancement of FOAF+SSL should come from the fact
>> we have the cryptographic instruments (the keys) and that we
>> *could* go further than effectively just checking who controls the
>> hosting of a WebID. There is the potential, but this isn't
>> something we do yet.
>
> Well we do: if you use foaf, you could use the foaf friend network to
> establish a certain level of trust.

Once more, you're confusing "web of trust" in the cryptographic sense with a "network of reputation". FOAF builds links between identifiers, not necessarily between real people. The difficulty is to be able to prove that association to a level sufficient for the service using it. If you can't assert with a sufficient level of certainty (hard to quantify, indeed) that users really are who they say they are (i.e. authentication), there's no point fetching further data about their social network beyond that. The social network can help improve the degree of authentication by confirming the public key, not by saying things about the URI.
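
What I mean by "confirming the public key" could look something like the sketch below. It is purely illustrative: corroborated() and the key_lookups callables are names I'm making up here, each callable standing for one independently trusted source (a friend's FOAF file fetched over its own channel, a PGP-style signature, a record held by the bank, ...) that returns the key it publishes for that WebID.

# Sketch: accept the key/WebID binding at a higher level of assurance only
# when several independently trusted sources publish the same key, instead
# of trusting the single document served at the WebID.

def corroborated(webid, presented_key, key_lookups, minimum=2):
    """key_lookups: callables, one per trusted source, each returning the
    public key that source associates with this WebID (or None)."""
    agreements = sum(1 for lookup in key_lookups if lookup(webid) == presented_key)
    return agreements >= minimum

The reputation links between WebIDs can then be layered on top, but only once that binding has been corroborated.
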
>> In addition, you seem to assume that certificates reduce the
>> complexity. Sure, there are fewer steps in the protocol, but
>> there's also a security issue in how users perceive and make use of
>> the technology. Unfortunately, most people consider that client
>> certificates increase the complexity.
>
> Because they look at books on the issue which tell them that this is
> the case. And indeed until you consider foaf+ssl, client
> certificates do seem to create problems. But those problems
> disappear with foaf+ssl.

I think you're forgetting the problems we've had (and still have) getting some browsers to work.

>> As such, if they're badly understood, client certificates could be
>> badly taken care of by their users, which could in fact make
>> things worse.
> Ok, so now the argument is one of the psychology of how people may
> misinterpret the security of the protocol. Well that's going to be
> something that will require a bit of practical education. People
> learnt how to do that for online shopping. Companies will need to
> work on developing simple games that help make this easy to
> understand.

I'm only talking from my experience with users of certificates in the academic community. While some of the problems are definitely due to the certificate emission process via the CA (which is where FOAF+SSL provides a clear improvement), other problems are due to the fact that some people don't know (or don't have the time, or don't want to learn) what private keys, public keys and certificates are. Porting them from one browser to another is always an issue, and so is access from public terminals (and on-the-fly re-generation would be an issue for building a cryptographic web of trust).

Of course, the fact that even experts use the words "certificate" and "public key" rather loosely doesn't help (e.g. using a certificate to authenticate also implies using the private key, and a "PGP public key" actually refers to a certificate).

Large-scale experiments with client certificates have been tried and some have failed. It's just a fact. For example, the French tax system issued certificates, but had to abandon them. The way those certificates were issued was relatively simple, but a large number of people were just getting confused (how to back them up, where they were stored, what the difference was between a password on the website and a password to protect the private key...). They also had problems with browser vendors changing the way certificates are handled.

I'm not saying things won't improve, but it's just not as easy as you seem to think. Maybe I'm wrong, but I've actually seen users with a reasonably high level of education struggling with certificates (or losing patience) in real situations. I'm not trying to be negative, but I'm definitely less optimistic than you about the convenience of certificates...

As I've said, I do believe that FOAF+SSL has the potential to offer a higher level of security, but we're not there yet. I'm only talking of a "higher level of security"; no system is perfect. Not every service requires the level of assurance I'm talking about, but the level we have at the moment (simple dereferencing) is no better than OpenID.

You can get data about the resource from however many trusted sources you want and reason as much as you want with that, but the results will refer to the URI. You can base your authorization decision on those results, sure, but you also need to make sure the WebID matches the actual user in a way that's appropriate for the sensitivity and risk associated with the service. Fancy reasoning for ACLs is not particularly useful if you haven't checked first that the user at the other end is the right one.

Best wishes,

Bruno.
Received on Tuesday, 13 July 2010 16:14:41 UTC