- From: Robin Wilton <wilton@isoc.org>
- Date: Thu, 29 Jan 2015 13:46:16 +0000
- To: Joseph Lorenzo Hall <joe@cdt.org>
- CC: ext David Singer <singer@apple.com>, Bjoern Hoehrmann <derhoermi@gmx.net>, Rigo Wenning <rigo@w3.org>, "public-privacy mailing list (W3C)" <public-privacy@w3.org>
- Message-ID: <F03777D0-2B90-4D08-B5D8-69E76A70E198@isoc.org>
Hi folks - just catching up on this very interesting thread after a few days off. I think Joe raises two important questions below, under (a) and (c). Some comments inline...

On 26 Jan 2015, at 18:38, Joe Hall <joe@cdt.org> wrote:

> On Mon, Jan 26, 2015 at 4:33 AM, David Singer <singer@apple.com> wrote:
>> Oh dear, I am clearly explaining this badly.
>
> Thanks much for this, David. I definitely see it clearly now.
>
>> I think it’s interesting in a number of respects:
>>
>> a) it’s an improvement on the status quo, where servers are completely unaware of any attempt to be private
>
> I guess traditional client privacy tools see the servers as potential adversaries, so leaking an indication of intent in terms of private browsing could be a risk (e.g., server says, "ooooh, this session I would have associated with another session seems to want me not to link those two sessions... in fact, I'll label it as 'stuff this person really doesn't want people to know about'"). Here I guess this isn't clearly a leak of "I'm trying to be private, mom!!!" since it could very well be just a different person's session using essentially the same UA/env as a previous person. This makes me wonder if existing tools to segregate "persona"-like elements (accounts on an OS, profiles for something like Mozilla products) don't do that enough? Or maybe they're too heavy?
>
> Do you see a need for a server-side personae compliance spec, David? (Or am I thinking too far ahead or making this too complicated?)

Right - David is suggesting, if I understand it correctly, that users should be able to associate an identifier with a given private browsing persona - such that any private browsing sessions initiated under that persona share the same identifier. So - as Joe suggests below - I might use persona A when browsing a job vacancies site, and persona B when *cough* ‘looking at content online’… My initial reaction was that adding an identifier for each persona just increases the linkability of data gathered by the server. But then, I guess, if the server is recording browser-independent identifiers like IP address, then the “per-persona” identifier does not make things much worse. (I've put a rough sketch of what I mean at the end of this message.)

>
>> b) it’s not asking for *secrecy* at all; servers are at liberty to remember as much as before; there are very few privacy proposals that don’t slide into trying to be secret, and this is one. Privacy is also about where information is exposed, what it is linked to, and so on.
>
> Interesting, would servers be at liberty to simply link all the personas they identify as likely the same user? (e.g., using fancy analytics like typing analysis, etc. to tell if two different personas are in fact the same person) That would seem to be a good part of the bargain to have here... and perhaps this isn't as complicated in terms of server compliance as TPWG/DNT?
>
>> c) it recognizes that privacy is not a binary state — it’s not an either-or (you have it or you don’t); it’s a spectrum, and it’s about perception and control and exposure as much as it is about recording and so on.
>
> Forgive me again... are you saying that by being able to have as many personas as I can keep track of that I'm "articulating" (a social science term of art, sorry) different aspects of my being that I'd rather servers not link together? That is rather interesting.
> For example, you could have a persona for activities that you want privacy of a certain level (say me looking at job candidate websites online) and another persona for activities of a higher level (say, if I'm looking at content online that I'd rather not have linked to my not-so-private self)?

I think this kind of persona ‘articulation’ is key to online privacy. It is intimately linked with the way we understand privacy in real life. We represent ourselves differently to, say, our doctor, our employer, our spouse, our children (NB - this is not deception; it’s selectivity. Representing oneself differently according to context does not imply a lack of integrity on the part of the individual). If users cannot selectively represent subsets of their attributes online, then privacy really is dead. Or at least in a possibly-reversible coma.

However, as above, we have to be mindful of the fact that persona separation at the client side can only achieve so much. If servers are able to “re-connect” personas that the user is trying to keep separate (for instance, by linking identifiers over which the user has no control), then the goal of “privacy through persona separation” is at risk.

Just as Helen Nissenbaum’s work is the seminal treatment of contextual integrity, I think Andreas Pfitzmann’s paper on anonymity/unlinkability is the definitive work on “re-connecting” personal data…

Hope this helps - as I say, a very interesting thread…

Robin

>
> thanks again, Joe
>
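P.S. In case a concrete illustration helps, here is a rough sketch of the per-persona identifier idea as I have paraphrased it above. To be clear, this is entirely my own toy example, not anything David has actually specified: the class names and the "Persona-ID" header are invented for illustration. The point it tries to make is simply that sessions under the same persona are linkable by design, while the identifiers themselves say nothing that links persona A to persona B (although, as discussed, the server may still re-connect them via IP addresses, fingerprinting and so on).

import secrets

class Persona:
    """A client-side browsing persona with its own stable, random identifier."""
    def __init__(self, label):
        self.label = label
        # Random and meaningless to the server; stable for the life of the persona.
        self.identifier = secrets.token_urlsafe(16)

class PrivateSession:
    """A private browsing session opened under a particular persona."""
    def __init__(self, persona):
        self.persona = persona

    def request_headers(self):
        # Nothing is kept secret here: the server may remember whatever it likes
        # about this identifier; the only claim is about what gets linked to what.
        return {"Persona-ID": self.persona.identifier}  # hypothetical header name

# Persona A for job hunting, persona B for *cough* 'looking at content online'.
persona_a = Persona("A: job vacancies")
persona_b = Persona("B: other browsing")

s1 = PrivateSession(persona_a)
s2 = PrivateSession(persona_a)  # a later session under the same persona
s3 = PrivateSession(persona_b)

# Sessions under the same persona are linkable by design...
assert s1.request_headers() == s2.request_headers()
# ...but the identifiers alone do not connect persona A to persona B.
assert s1.request_headers() != s3.request_headers()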
Received on Thursday, 29 January 2015 13:46:47 UTC