- From: Jonathan A Rees <rees@mumble.net>
- Date: Tue, 17 Jan 2012 14:19:41 -0500
- To: ashok.malhotra@oracle.com
- Cc: "www-tag@w3.org" <www-tag@w3.org>
Since we're talking about framing... I wonder if we can divide the use cases into two categories: those that have to do with access control vs. those that have to do with use control. I'm not sure there's a sharp line, but personally I find the distinction useful.

Access control is controlling who has access to information. Access is mediated by a platform that is presumed to be trustworthy - that is, I tell the platform who or what is allowed to access (read) the information, and if I've specified the right policy and the platform respects it, then all is well. This to me feels like the domain of security, not privacy, even when the "platform" includes decisions made by real people and organizations such as social networks. The goal is simply to keep information out of the wrong hands.

To me "privacy" is mainly about use control, which means controlling what an agent can do with information that it already has for some particular legitimate purpose. Because the agent already has access, this is pretty much unrelated to access control. I would say use control has three parts:

1. Communicating the intended use restriction policy to those who are supposed to follow it.
2. Technical support to make it easier for agents to respect any given policy.
3. Accountability support - technical apparatus to help track accountability, and social and legal apparatus to hold nonrespecting agents accountable somehow.

IIUC #1 is what the IETF privacy API work is about (pass policy parameters around), while #3 is what the MIT TAMI project is about. #2 is just a conjecture - it's hard for me to imagine what it would be like.

Obviously transforming use control problems into access control problems is desirable, since the latter have technical solutions. Capability systems suggest some techniques for doing this (attenuation, confinement, etc.).
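[The attenuation idea mentioned above can be sketched in a few lines of Python. This is only an illustration - the class names are made up, not from any real capability library - of the core move: instead of handing an agent the full capability, you hand it a weaker facade that structurally cannot exercise the withheld authority, turning a use-control promise ("please don't write") into an access-control fact ("you can't write").]

```python
# Illustrative sketch of capability attenuation (hypothetical names,
# not a real library). A holder of a full read/write capability hands
# out a read-only facade instead of the capability itself.

class FileCap:
    """Full capability: read and write access to some stored text."""
    def __init__(self, contents: str):
        self._contents = contents

    def read(self) -> str:
        return self._contents

    def write(self, new_contents: str) -> None:
        self._contents = new_contents


class ReadOnlyCap:
    """Attenuated capability: exposes only read(), so the recipient
    holds no write authority at all."""
    def __init__(self, cap: FileCap):
        self._cap = cap

    def read(self) -> str:
        return self._cap.read()


full = FileCap("secret report")
weak = ReadOnlyCap(full)

# Pass `weak`, not `full`, to the other agent:
print(weak.read())             # the agent can still read the data
print(hasattr(weak, "write"))  # but has no write method: False
```

[Of course Python's underscore convention is not real confinement - a genuine capability system enforces this at the language or OS level - but the shape of the technique is the same.]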
But in the distributed computing environments we're talking about this is not always practical, and we have to rely on legal and social pressure to persuade agents to respect use restrictions.

Jonathan

On Sun, Jan 1, 2012 at 11:21 AM, ashok malhotra <ashok.malhotra@oracle.com> wrote:
> Some Thoughts on Privacy
>
> The W3C has started a DNT WG. This is good, but it only covers a corner of
> what I like to call The War on Personal Privacy. There are several other
> aspects we need to consider.
>
> 1. Personal information that people entrust to social networks or other
> websites with the understanding that it is private or has limited visibility
> is leaked to others for profit or due to incompetence.
>
> 2. Folks collecting information about you without your knowledge or
> consent. For example, Google trucks driving by your house and capturing
> your network SSID, or cellphones capturing location and other information.
>
> 3. Clickjacking and identifying folks by mouse usage patterns, etc. This may
> be a subcase of the above or perhaps a separate category.
>
> What privacy thieves are after is identity and personal information, as well
> as attitudes and preferences for marketing purposes. Studies have shown
> that it is possible to predict a person's Social Security Number with a fair
> degree of accuracy based on a few pieces of information. Other studies have
> shown that sexual and political preferences can be determined from a
> relatively small amount of behavioral data.
>
> What can be done?
>
> There seems to be little hope that technical solutions can prevent privacy
> theft. Encryption, both in transport and storage, can mitigate the
> situation but does not provide a complete solution. So, what can be done?
>
> Weitzner et al. argue that the only solution is to hold privacy thieves
> accountable and prosecute if necessary. For this we need stronger laws.
> Europe has stronger privacy laws than America. Is there a policy statement
> we can make here?
>
> Another solution is a social solution. If your social network divulges your
> personal information without your consent, make a big fuss, write a blog,
> make sure the violation is made public, and hopefully the practice will
> stop. Should the W3C encourage such social activism?
>
> Perhaps the TAG could publish Guidelines for Protecting Your Privacy in the
> Age of Web 2.0.
>
> --
> All the best, Ashok
Received on Tuesday, 17 January 2012 19:20:09 UTC