Re: Browser UI & privacy - a discussion with Ben Laurie

On 4 Oct 2012, at 14:03, Ben Laurie <benl@google.com> wrote:

> 
> 
> On 4 October 2012 12:12, Henry Story <henry.story@bblfish.net> wrote:
> 2) Practical applications in browser ( see misnamed privacy-definition-final.pdf )
> 
>    a) It is difficult to associate interesting human information with cookie-based
>    identity. The browser can at most tell the user that he is connected via a
>    cookie or is anonymous.
> 
> The presence of a cookie does not imply the absence of anonymity -

Without cookies you still have IP-address tracking, it is true. So add Tor to the mix 
and we get closer. Finding the right icon(s) is something for artists to work on.

> it's hard for the browser to say much beyond "cookies" or "no cookies". And having said it, it is not clear what the user would learn, at least in the "cookies" case.

He knows that the history of his interactions with the site is taken into account. 

That is fundamentally different from when I read a book, for example. When I read a book 
the author does not know that I read it, what I underlined, whom I gave it to, etc. Not having 
cookies brings us closer to the understanding of a web page as something like a page 
in a book. It can increase our trust that what is said is being stated publicly.


>  
> 
>    b) With Certificate based identity, more information can be placed in the
>     certificate to identify the user to the site he wishes to connect to, whilst
>     also making it easy for the browser to show him the identity under which he
>     is connected. But one has to distinguish two ways of using certificates:
> 
>       + traditional usage of certificates
>       Usually this is done by placing Personal Data inside the certificate. The
>    disadvantage is that this makes the personal data available to any web
>    site the user connects to with that certificate, and it makes the _Personal Data_
>    difficult to change (since that requires changing the certificate). So here
>    there is a clash between Data Minimization and user friendliness.
> 
>       + webid usage:
>       With WebID ( http://webid.info/spec/ ) the only extra information placed in the
>    certificate is a dereferenceable URI - which can be https based or a Tor .onion
>    URI, ... The information available in the profile document, or linked to from that
>    document, can be access controlled, increasing the _User Control_ the user has
>    over whom he shares his information with. For example the browser, since it has
>    the private key, could access all the information and use it to show the user as
>    much as it needs to. A web site the user logs into for the first time may just be
>    able to deduce the pseudonymous WebID of the user and his public key, that is all.
>    A friend of the user authenticating to the web site could see more information.
>        So User Control is enabled by WebID, though it requires more work at the
>    access control layer: http://www.w3.org/wiki/WebAccessControl
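( To make the mechanism concrete: here is a minimal Python sketch of the check a
relying site performs. The dict-based data structures and names are illustrative
stand-ins of my own, not from the spec; a real implementation would parse the
X.509 certificate and fetch the profile document over the web: )

```python
# Illustrative sketch of the WebID check: a site authenticates a WebID by
# verifying that the public key presented in the TLS client certificate
# matches a key published in the profile document that the certificate's
# subjectAltName URI dereferences to.
# All structures below are toy stand-ins, not spec data formats.

def verify_webid(cert, fetch_profile):
    """cert: dict with 'san_uri' and 'public_key' (modulus, exponent).
    fetch_profile: callable mapping a URI to profile data, here a
    dict of the form {'keys': [(modulus, exponent), ...]}."""
    profile = fetch_profile(cert["san_uri"])
    return profile is not None and cert["public_key"] in profile["keys"]

# Stubbed profile store standing in for an HTTP (or .onion) fetch:
profiles = {
    "https://example.org/people/u/card#me": {
        "keys": [(0xC0FFEE, 65537)],
    }
}
cert = {
    "san_uri": "https://example.org/people/u/card#me",
    "public_key": (0xC0FFEE, 65537),
}
print(verify_webid(cert, profiles.get))  # True: the key matches the profile
```

( Access control then happens at the profile document: the same URI can serve
different descriptions to the browser holding the private key, to a friend, or
to a first-time site, which learns only the pseudonymous WebID and public key. )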
> 
> You continue to miss my point here, so let me spell it out.
> 
> Suppose the user, using access control, decides to allow site A see all his data and site B to see none of it. Site B can, nevertheless, collude with site A to get access to all the user's data. First, when the user accesses site A, site A takes a copy of all his data and links it to his public key. Next, the user logs into site B, which tells site A the user's public key. Site A returns the user's data, and now site B also knows it.
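The collusion described here can be sketched in a few lines of Python - toy
in-memory stand-ins of my own for sites A and B, with the public key as the
linking handle:

```python
# Toy sketch of the collusion scenario: site A copies the data the user
# allowed it to see and links it to the user's public key; site B, which
# was allowed to see nothing, asks A for the data behind a key it has
# just observed at login. All names here are illustrative.

site_a_store = {}  # public key -> data A copied at login

def user_logs_into_a(public_key, data):
    site_a_store[public_key] = data  # A keeps a copy, linked to the key

def b_colludes_with_a(public_key):
    # B learns the data without the user ever sharing anything with B
    return site_a_store.get(public_key)

user_logs_into_a("pk-1234", {"name": "U", "email": "u@example.org"})
print(b_colludes_with_a("pk-1234"))  # B now holds U's data
```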

Let us make this more precise. Call the user U.

When U tells A something P we have 

   U tells A that P. 

When A tells B we have

   A tells B that U said P.

So there are two problems for B
 (1) B has to trust that what A told him is true. B knows that A is breaking the law
   by passing on confidential information, so why should B be sure he can trust A
   on this information? ( This is the problem that comes up in old westerns: the
   gangsters who need to team up to rob a bank end up shooting each
   other one by one. )

 (2) if B comes to use the information P in a decision that he might have to justify in 
 court, then B will not be able to cite it as legitimately obtained. He will have to 
 admit to having received the information from A, and A will then be liable 
 for breach of confidentiality.

Since there are innumerable ways any piece of information can merge with
other information to produce action, it is very risky for A to give anyone
information about U: it is very likely that this information will somehow
be tied to other information and leak out in a way for which A can be held
responsible.

This is very similar to the way privacy is dealt with in everyday life in most situations,
and it is why we made the point that privacy is not anonymity. 

> 
> Clearly if the user uses a different certificate at site B, B and A can no longer collude in this way.

But neither can people communicate and create social networks. So yes, if you 
put a computer in your cave with no internet connection, then you will be in a lot 
less danger of information leakage. But neither will your data sharing be as 
efficient - i.e. you will need to use pre-internet technology to work with other people. 

We are trying to use internet technology to allow agents to work together in 
ways that reduce the number of unnecessary intermediaries down to 0.
We are not trying to create social networks that make humans incapable of
stupidity, duplicitousness, or other flaws.

In any case you don't need a lot of information to be able to make identity claims.
Three or four relations plus enough background knowledge suffices, as many
reports have shown.

Furthermore, most people identifying themselves on the internet use e-mail identifiers 
- global IDs - and this is even encouraged by systems such as Mozilla Persona
( http://www.mozilla.org/en-US/persona/ ). How many people logging in to a site
do you think will get very far without giving out identifying information?

So this aim for perfection makes it difficult to create distributed social 
networks, increases centralisation as a result - leading to less privacy - and 
in most cases does not have the intended consequence of increasing privacy.

I understand that WebID is not the tool to use if you wish for less than 
pseudonymity. And there are cases where less than pseudonymity is
important. But there are also times when you need more.

Henry


Social Web Architect
http://bblfish.net/

Received on Thursday, 4 October 2012 12:49:10 UTC