W3C home > Mailing lists > Public > public-xg-webid@w3.org > April 2011

Re: WebID security picture

From: Henry Story <henry.story@bblfish.net>
Date: Fri, 8 Apr 2011 12:11:17 +0200
Cc: WebID XG <public-xg-webid@w3.org>
Message-Id: <DE8210D0-EB4E-47AA-B2C7-39A70EAD0299@bblfish.net>
To: Mo McRoberts <Mo.McRoberts@bbc.co.uk>

On 8 Apr 2011, at 11:40, Mo McRoberts wrote:

> Hello folks,
> 
> I've been thinking on the security aspects of WebID since some conversations with Henry last week, and I wanted to outline them here to get some sense of collective opinions (and share mine, naturally). My take is that there are some gaps in the picture, but that they're possibly resolvable in some fashion or another.
> 
> The default scenario — the one which most efforts to date have focussed on — is that the private key for a WebID certificate is managed by the individual, on their own client device, while the FOAF RDF is served by what amounts to an untrusted entity (even if it might well sign the RDF).
> 
> Further, the subject URI named in the certificate and the FOAF RDF *is* the identity of the certificate-holder.
> 
> Confirming that the agent on the end of an SSL connection is the one who holds a particular public key is easy.
> 
> Confirming that the agent on the end of an SSL connection is the same as the agent with the identity described by the FOAF RDF is less easy.

Indeed, we do not deal with confirming the other details in the WebID Profile. Each kind of fact requires a different type of verification:
 
1. To prove that the connecting agent holds the certificate, you use cryptography: the TLS handshake proves possession of the private key.
2. To prove that the agent is identified by the WebID, you use the WebID protocol: you dereference the WebID and check that the profile lists the certificate's public key.
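The two steps above can be sketched as follows. This is a minimal illustration, not a real verifier: the TLS layer is assumed to have already done step 1 (proved possession of the private key), and `fetch_profile` is a hypothetical helper standing in for an HTTP GET plus RDF parsing of the WebID Profile, here backed by invented sample data.

```python
def fetch_profile(webid_uri):
    # Hypothetical stand-in: a real verifier would dereference the WebID
    # URI and extract the cert:modulus / cert:exponent pairs the profile
    # asserts. The data below is invented for illustration.
    sample_profiles = {
        "https://example.org/people/henry/card#me": [
            {"modulus": 0xCAFEBABE, "exponent": 65537},
        ],
    }
    return sample_profiles.get(webid_uri, [])

def verify_webid(san_uri, cert_modulus, cert_exponent):
    """Step 2 of the protocol: the WebID names the certificate holder
    only if the profile it dereferences to lists the certificate's
    public key."""
    for key in fetch_profile(san_uri):
        if key["modulus"] == cert_modulus and key["exponent"] == cert_exponent:
            return True
    return False

# The key the profile lists verifies; any other key does not.
print(verify_webid("https://example.org/people/henry/card#me", 0xCAFEBABE, 65537))  # True
print(verify_webid("https://example.org/people/henry/card#me", 0xDEADBEEF, 65537))  # False
```

Note that this is exactly why an attacker who can edit the profile wins: the check compares the certificate's key against whatever the profile currently says.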

How to verify the rest of the FOAF profile is left open. But there are many policies you can apply, depending on the degree of security you need. One such policy is to assign a certain level of trust to people known to friends of yours: you use your friends as filters on what other people say about themselves.

This is how most social networks function today. We can just use linked data to do the same on a global scale.
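The friends-as-filters policy can be sketched with a tiny foaf:knows graph. The graph and the trust levels are invented sample data, not real WebID profiles; a real implementation would build the graph by dereferencing linked profiles.

```python
# Invented sample foaf:knows graph: who each person claims to know.
knows = {
    "me": {"alice", "bob"},
    "alice": {"carol"},
    "bob": {"carol", "dave"},
}

def trust_level(person, me="me"):
    """0 = self, 1 = direct friend, 2 = friend of a friend,
    None = unknown (claims get no filtering benefit)."""
    if person == me:
        return 0
    friends = knows.get(me, set())
    if person in friends:
        return 1
    if any(person in knows.get(friend, set()) for friend in friends):
        return 2
    return None

print(trust_level("alice"))    # 1: a direct friend
print(trust_level("carol"))    # 2: known to alice and bob
print(trust_level("mallory"))  # None: nobody vouches for them
```

A relying party could then, for example, accept self-asserted profile facts only from agents at trust level 2 or better.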

> 
> Consider an attacker who wishes to impersonate you, and has gained access to the [untrusted] server where your RDF is published.

In that case your security has been breached. You have to hope that one of your friends will notice something odd and alert you. By the way, if someone gains access to your laptop and your keychain password, you are also in deep trouble.

> All they need to do is generate a certificate with a SAN matching yours, and modify the RDF to include their public key. As far as a relying party is concerned, they *are* you, because you verifiably have the same subject URI.

yes.

> 
> Worse, even if the RDF is signed in some way — be it via SSL (often inconvenient) or a detached signature (workable) — then this problem isn't mitigated, because they can just sign the RDF using their own key as though they were you.

yes, that's why I don't think using signatures on the foaf helps.

> 
> The big question is whether this is a problem which actually needs to be solved, and if so, how?

I think the best solution is to create more secure operating systems. 

> 
> In other words:
> 
> - Is it simply the case that if an attacker gains access to the server hosting your RDF, all bets are off? If so, this means that relying on a third party to serve that then becomes a matter of calculated risk and trust (and for large-scale adoption, does this then mean that you're trusting, e.g., Facebook not to hijack your identity)?

500 million people (accounts?) are trusting Facebook not to hijack their identity. Facebook's valuation depends on its not being seen to do that.

You would have a different trust relation if you had your own FreedomBox. You would still be trusting other people: the device manufacturer, and the OS and software developers. But the box and the user would be in a one-to-one pairing: this is why the box could justifiably say "I" for you. In the case of Facebook or company sites you should be using the first person plural "we". In those spaces you are identified as part of a larger agency, which in part controls what you say, but can also confer prestige on what you say, as for example if your identity were tied directly to a university.

> This would seem to me to be less than ideal: if everybody has to have a web server that they can trust, the barrier to entry is significantly raised IMHO, even if it would be a good thing in general.

Nothing prevents large social networks adopting this. 

> 
> - Is the subject URI an adjunct, and relying parties should instead pay attention to the keys instead? Clearly, the way that WebID is structured suggests the answer is “no” — certificates [and so keys] are deemed to be essentially disposable, and you can by design have multiple certs+keys referring to the same subject — one per browser/device if you wish.

URIs make it much easier to create linked data, and so a social web. One could create public-key-based URI schemes, but that requires a lot more infrastructure to be built.
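To make the trade-off concrete, here is a sketch of what a key-derived identifier might look like: the URI is minted from a hash of the key material, so the identifier itself commits to the public key rather than to a dereferenceable HTTP location. The `key:` scheme here is hypothetical, invented purely for illustration; such a URI cannot be fetched, which is the linked-data cost mentioned above.

```python
import base64
import hashlib

def key_uri(public_key_der: bytes) -> str:
    """Mint a hypothetical hash-based 'key:' URI from a key's DER bytes.
    The identifier is derived from the key itself, so nobody who lacks
    the private key can claim it -- but it dereferences to nothing."""
    digest = hashlib.sha256(public_key_der).digest()
    token = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return "key:sha-256;" + token

# Sample bytes standing in for a real DER-encoded public key.
print(key_uri(b"\x30\x82\x01\x0a\x02\x82\x01\x01"))
```

The same key always yields the same URI, and distinct keys yield distinct URIs, which is what gives the scheme its security; what it gives up is the ability to follow the identifier to a profile document.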

> 
> - Does there need to be a 'shared' key which is associated with both the certs and the RDF in some way, and only you hold?

No, there does not need to be such a root key, though I don't think it is excluded as a future option.

> This solves the problem, but complicates the processes — you need to make sure that you don't lose that root key [naturally], and you need to have it to hand whenever you need to generate a new certificate;

Yes, this creates very strong technical and social problems that cannot be lightly overcome. People will lose their public or private keys, or viruses will steal them from their computers, at least until hardware keys are widely available. Making keys the central point of focus would require too much teaching of people.

> on the other hand, it does allow a cryptographically strong identity to be maintained for an agent independently of the certificate/browser/device being used (i.e., the public half of the shared key), which will be very useful for certain applications;

Only if you get people to learn to keep their private keys very, very safe. I think as WebID grows, the sale of crypto keys will become a business, which will then make options in this space more viable.

> of course, it doesn't preclude multiple 'shared' keys for different identities which you might want to maintain.
> 
> I'd appreciate the thoughts of others (I'm don't know if this might be old ground? apologies if so).
> 
> M.
> 
> -- 
> Mo McRoberts - Data Analyst - Digital Public Space,
> Zone 1.08, BBC Scotland, 40 Pacific Quay, Glasgow G51 1DA,
> Room 7066, BBC Television Centre, London W12 7RJ,
> 0141 422 6036 (Internal: 01-26036) - PGP key 0x663E2B4A
> 

Social Web Architect
http://bblfish.net/
Received on Friday, 8 April 2011 10:11:52 UTC
