Re: Browser UI & privacy - a discussion with Ben Laurie

On 10/4/12 8:03 AM, Ben Laurie wrote:
>
>
> On 4 October 2012 12:12, Henry Story <henry.story@bblfish.net> wrote:
>
>     2) Practical applications in browser ( see misnamed
>     privacy-definition-final.pdf )
>
>        a) It is difficult to associate interesting human information
>     with cookie-based
>        identity. The browser can at most tell the user that he is
>     connected by
>        cookie or anonymously.
>
>
> The presence of a cookie does not imply the absence of anonymity - it's 
> hard for the browser to say much beyond "cookies" or "no cookies". And 
> having said that, it is not clear what the user would learn, at least in 
> the "cookies" case.

It does. Here's why: entropy [1][2][3] is alive and well in today's 
world of webby data fragments and exponentially increasing computing 
power. Every fragment a site observes (a cookie, a user-agent string, a 
timezone) contributes bits of identifying information, and those bits 
add up quickly.
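
To make the arithmetic concrete, here is a minimal Python sketch of the 
information-theoretic point made in [3]. The fragment probabilities 
below are illustrative assumptions, not measured data:

    import math

    # Assumed odds that a random visitor matches each observed fragment.
    # Each fragment contributes -log2(p) bits of identifying information.
    fragments = {
        "user-agent string":    1 / 500,
        "timezone":             1 / 24,
        "persistent cookie id": 1 / 100000,
    }

    total_bits = sum(-math.log2(p) for p in fragments.values())
    print(f"combined identifying information: {total_bits:.1f} bits")
    # Roughly 30 bits here; [3] notes that ~33 bits suffice to single
    # out one person among everyone alive.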

>
>
>        b) With Certificate based identity, more information can be
>     placed in the
>         certificate to identify the user to the site he wishes to
>     connect to whilst
>         also making it easy for the browser to show him under what
>     identity he is
>         connected as. But one has to distinguish two ways of using
>     certificates:
>
>           + traditional usage of certificates
>           Usually this is done by placing Personal Data inside the
>     certificate. The
>        disadvantage of this is that it makes this personal data
>     available to any web
>        site the user connects to with that certificate, and it makes
>     it difficult to
>     change the _Personal Data_ (since it requires changing the
>     certificate). So here
>        there is a clash between Data Minimization and user friendliness.
>
>           + webid usage:
>           With WebID ( http://webid.info/spec/ ) the only extra
>     information placed in the
>        certificate is a dereferenceable URI - which can be https based
>     or a Tor .onion
>        URI,... The information available in the profile document, or
>     linked to from that
>        document can be access controlled, resulting in increased
>     _User Control_ over whom
>        he shares his information with. For example the browser, since
>     it has the private key,
>        could access all information, and use that to show as much
>     information as it
>        can or needs. A web site the user logs into for the first time
>     may just be able
>        to deduce the pseudonymous webid of the user and his public
>     key, that is all. A
>        friend of the user authenticating to the web site could see
>     more information.
>            So User Control is enabled by WebID, though it requires
>     more work at the
>        Access control layer http://www.w3.org/wiki/WebAccessControl
>
>
> You continue to miss my point here, so let me spell it out.
>
> Suppose the user, using access control, decides to allow site A see 
> all his data and site B to see none of it. Site B can, nevertheless, 
> collude with site A to get access to all the user's data.

You are setting the site as the atom. We are setting the identity of a 
human or software agent as the atom. There's a significant difference. 
Thus, how can site collusion even be relevant? I am constraining access 
to a resource denoted by <SomeWebDocumentURLorURI> such that it's only 
available to an identity denoted by <SomeAgentURI>. Please note how I am 
using *denotation* quite distinctly when referring to a Web Document and 
to a human or machine agent. This is fundamental to what WebID is all 
about. It isn't about the site; it's all about the identity of an agent 
(man or machine).
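
Here is what that constraint looks like at the WebAccessControl layer, 
as a minimal sketch using Python's rdflib (every URI below is a 
placeholder standing in for <SomeWebDocumentURLorURI> and 
<SomeAgentURI>):

    from rdflib import Graph, Namespace, URIRef
    from rdflib.namespace import RDF

    ACL = Namespace("http://www.w3.org/ns/auth/acl#")

    doc   = URIRef("https://example.org/data/doc")     # placeholder document
    agent = URIRef("https://example.org/people/me#i")  # placeholder WebID

    g = Graph()
    authz = URIRef("https://example.org/data/doc.acl#authz1")
    g.add((authz, RDF.type, ACL.Authorization))
    g.add((authz, ACL.accessTo, doc))   # the resource being protected
    g.add((authz, ACL.agent, agent))    # the one agent identity allowed in
    g.add((authz, ACL.mode, ACL.Read))  # read-only access

    print(g.serialize(format="turtle"))

Note that the object of acl:agent denotes an agent, not a site.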

> First, when the user accesses site A, site A takes a copy of all his 
> data and links it to his public key.

What data? The composite of <SomeAgentURI> plus 
<SomeAgentCertificatePublicKey> isn't "all his data". A key is just a 
key. This is about Web-scale composite keys serving as identifiers that 
facilitate intensional claims. It's about integrating logic into the 
Web's fabric.
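
To see why the composite matters, here is a minimal sketch of the 
verification step described at http://webid.info/spec/, assuming an RSA 
key and a profile that publishes cert:modulus (xsd:hexBinary) and 
cert:exponent (xsd:integer); it uses the cryptography and rdflib 
libraries:

    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import rsa
    from rdflib import Graph, Namespace, URIRef

    CERT = Namespace("http://www.w3.org/ns/auth/cert#")

    def verify_webid(pem_bytes: bytes) -> bool:
        cert = x509.load_pem_x509_certificate(pem_bytes)

        # The WebID travels in the certificate's subjectAltName URI entry.
        san = cert.extensions.get_extension_for_class(
            x509.SubjectAlternativeName)
        webid = san.value.get_values_for_type(
            x509.UniformResourceIdentifier)[0]

        # Dereference the WebID to fetch the (possibly access-controlled)
        # profile document.
        profile = Graph().parse(webid)

        # The composite check: does the profile claim this public key
        # for this agent URI?
        pub = cert.public_key()
        if not isinstance(pub, rsa.RSAPublicKey):
            return False
        nums = pub.public_numbers()
        for key in profile.objects(URIRef(webid), CERT.key):
            mod = profile.value(key, CERT.modulus)
            exp = profile.value(key, CERT.exponent)
            if mod is not None and exp is not None and \
               int(str(mod), 16) == nums.n and int(str(exp)) == nums.e:
                return True
        return False

The relying site never learns more than the WebID and the public key; 
everything else stays behind the profile's access controls.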

> Next, the user logs into site B, which tells site A the user's public 
> key. Site A returns the user's data, and now site B also knows it.

See my comment above. That isn't the point. Your scope is coarse, 
whereas what we are trying to articulate is very fine-grained and 
ultimately about keys and intensionality via logic.

>
> Clearly if the user uses a different certificate at site B, B and A 
> can no longer collude in this way.

Of course they could; there are many ways to exploit equivalence by 
name (denotation) or co-reference semantics via the logic that drives 
these ACLs. For instance, you can have multiple WebIDs in a certificate, 
which delivers the aforementioned semantics implicitly. Or you can have 
a WebID resolve to a profile document where the aforementioned semantics 
are expressed explicitly. In either scenario the co-reference claims are 
signed and verifiable, as sketched below.
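
The explicit variant is nothing more exotic than a co-reference claim 
in the profile document. A minimal rdflib sketch, where both WebIDs are 
placeholders and owl:sameAs stands in for whichever equivalence 
predicate the profile uses:

    from rdflib import Graph, URIRef
    from rdflib.namespace import OWL

    webid_a = URIRef("https://a.example/profile#me")  # placeholder WebID
    webid_b = URIRef("https://b.example/card#me")     # placeholder WebID

    # Explicit co-reference: both names denote the same agent, so an ACL
    # keyed on one can honour the other once the claim is verified.
    g = Graph()
    g.add((webid_a, OWL.sameAs, webid_b))

    print(g.serialize(format="turtle"))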

Links:

1. http://privacy-pc.com/news/how-to-hack-facebook-account-2-using-lcg-for-facebook-profile-hacking.html
2. http://howto.cnet.com/8301-11310_39-57368016-285/how-to-prevent-google-from-tracking-you/
3. https://www.eff.org/deeplinks/2010/01/primer-information-theory-and-privacy

-- 

Regards,

Kingsley Idehen	
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen

Received on Thursday, 4 October 2012 13:04:06 UTC