
Re: [foaf-dev] Re: privacy and open data

From: Story Henry <henry.story@bblfish.net>
Date: Thu, 27 Mar 2008 09:28:12 +0100
Cc: Peter Ansell <ansell.peter@gmail.com>, kidehen@openlinksw.com, "Phil Archer" <parcher@icra.org>, "Semantic Web" <semantic-web@w3.org>, "foaf-dev Friend of a" <foaf-dev@lists.foaf-project.org>
Message-Id: <8F6D1E51-53E5-4FA6-A2BA-7D09EECFCF3B@bblfish.net>
To: Karl Dubost <karl@w3.org>

On 27 Mar 2008, at 04:01, Karl Dubost wrote:
> Our written culture is the ossification of our identity.

Not really, because people can lie about you, be mistaken, be at odds,
or interpret the same circumstances differently. If you find a statement
on the internet about someone, can you believe it? Not necessarily.
Every day I receive email telling me I have won a million pounds. I never
look; it goes straight into the recycle bin.

So how do people come to believe anything?

The beauty of foaf (and Linked Data) is that I can hand out a URL at
parties and conferences which can be dropped into Semantic Address
Books such as Beatnik, which will then always contain up-to-date
information about me. This URL will be trusted as pointing to information
I am maintaining about myself. People saw me hand it out; they see it on
my business card. They know my business relies on this information. They
trust it and me. We have a trust economy.

They will also trust that the people I link to in my foaf file are
real, because people will make decisions based on who is in my foaf
file, such as perhaps asking me for an introduction. Others will use
these stated relationships, as the DIG group did, to grant access
rights. Who you put in your foaf file is not without consequences.
Others will only allow certain detailed information about them to be
visible to friends of their friends (as specified by relations between
foaf files).
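For illustration, a minimal foaf file of the kind being discussed might look like this in Turtle (all names and URLs here are made up for the example):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<https://example.com/people/henry#me>
    a foaf:Person ;
    foaf:name "Henry Story" ;
    # the foaf:knows links are the stated relationships a server
    # could use, as the DIG group did, to grant friend-of-a-friend access
    foaf:knows <https://example.org/people/alice#me> ,
               <https://example.net/people/bob#me> .
```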

But how do we do this? How do we identify who is looking at the data,
so that we can give them more or less information?

Now clearly we can develop a vocabulary that makes statements about
what information can be copied. Copyright law and database law may in
fact restrict quite a lot of what can be republished, though clearly
if I have access to some information I should also be able to reuse
it. But I don't think developing copyright vocabularies is the most
important thing to do, because once information about me is copied and
made visible to people who don't have access to the original, the new
information provider is responsible for that information. If the
information he republishes is private, then he has to defend the
statements he makes, which can be very costly. The more private the
information, the less likely he will be able to defend it. If he can't
defend the information he republishes, he should not republish it.
Simple really.

So what we need is a way to make information accessible only to some
subgroup of people. This is where I was thinking we need a very
simple protocol:

1. An HTTP request for a resource returns a minimal representation,
   with an HTTP header requesting the user's identity.
2. The user identifies himself using one of his foaf URLs,
   and signs some string and token with his private key.
3. The server fetches the foaf file, gets the public key,
   verifies the string signed in 2 against the public key,
   and can then decide whether the person is allowed more access to
   the resource.

That is just the outline of the minimal HTTP protocol that is needed
to create private spaces in an open data network. It should be very
simple to implement, and more complex systems can be built on top of
it.
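The challenge-response at the heart of the outline can be sketched in a few lines of Python using the `cryptography` package. This is only an illustration of the idea, not a real implementation: the foaf URL is made up, and the HTTP fetch of the foaf file is stood in for by a dictionary lookup; the original outline also does not fix a signature algorithm (Ed25519 is used here just to keep the sketch short).

```python
# Hypothetical sketch of the identity protocol outlined above.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- client side ---------------------------------------------------
client_key = Ed25519PrivateKey.generate()
client_foaf_url = "https://example.com/people/henry#me"  # made-up foaf URL

# 1. The server returns a minimal representation plus a one-time
#    challenge string for the client to sign.
challenge = os.urandom(32)

# 2. The client identifies itself by its foaf URL and signs the
#    challenge with its private key.
signature = client_key.sign(challenge)

# 3. The server dereferences the foaf URL to obtain the public key;
#    a dict stands in for the HTTP fetch of the foaf file here.
published_keys = {client_foaf_url: client_key.public_key()}
public_key = published_keys[client_foaf_url]

try:
    public_key.verify(signature, challenge)
    authenticated = True   # grant access to the fuller representation
except InvalidSignature:
    authenticated = False  # serve only the minimal public subset
```

Because the challenge is freshly generated per request, a captured signature cannot be replayed later; the server only needs the foaf file to be dereferenceable to check it.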


Received on Thursday, 27 March 2008 08:29:17 UTC
