RE: Decisions by the group

Unfortunately I was unable to make last week's call. Here are my perspectives on your three aspects:

1) I feel it would be useful to cover online privacy as it relates to website engagement via a user agent. With DNT, the E-Privacy Directive, and other demands for greater privacy and control, there are a lot of organizations looking for guidance on how to respond.

2) I would choose implementers, but I don't see how we can target that audience without providing guidance for deployment as well.

3) Firstly, I would like to extend "Notice and Consent" to include Transparency. What I mean by that is giving users access to the data that has been collected after Notice and Consent has been given. I would also like to add a discussion of Sharing and Portability to your list.

You give a fine example of a consent mechanism for apps, http://blog.benward.me/post/968515729, though it wouldn't work well for websites, as the fatigue you mention would quickly set in. Addressing N&C for websites is a tough issue, and the current cookie model makes it harder. If we could find a persistent, ubiquitous means for users to indicate which sites they trust or distrust without all the popups, banners and opaqueness (e.g. Trust as4rg.net), we would get a huge high-five from Internet users around the globe.

I hope to make the next call so we can have a lively discussion around these topics.

Warm regards,
JC

-----Original Message-----
From: Hannes Tschofenig [mailto:hannes.tschofenig@gmx.net] 
Sent: Friday, July 20, 2012 4:59 AM
To: public-privacy (W3C mailing list)
Cc: Hannes Tschofenig
Subject: Decisions by the group

At yesterday's conference call I was asked for my opinion on what to do in the PING working group. While I tried to answer that question on the phone, I am not sure I got my points across. Here is an attempt to summarize my thoughts.

There are three aspects for group members to decide:

1) What is the scope of a guidance document? 

The W3C covers a pretty broad scope of work. In response to Robin's writeup, which focused on privacy guidance for those who develop JavaScript APIs, I argued that this scope is too narrow. There is other work in the W3C that is looking for guidance as well. My favorite example is CORS.

What does the group think is useful to cover?
 
2) Who is the target audience of the recommendation?

I have already shared my views on this topic in previous mails and tried to explain the difference between the standardization community, implementers, and deployment.

The group has to decide who the audience is, because the recommendations will differ for each of these groups.

3) What is your model for privacy protection?

Before going into the details of providing guidance, it is useful to think about the main direction.
I have seen different themes (all with their pros & cons). Here are some examples (and one could combine different approaches):

a) Notice and Consent model

Before the collection of data, the data subject should be provided with a notice of what information is being collected and for what purpose and an opportunity to choose whether to accept the data collection and use.

There are also further design aspects concerning when this consent should happen. One model is to push it to contracts (e.g., terms of service and privacy notices when you sign up for the service); another is to ask the user at the time of sharing (in real time). As a simplified summary, the latter may require additional work or the integration of some protocol mechanisms (e.g., OAuth), and the former doesn't.
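
To make the real-time variant a bit more concrete, here is a rough sketch in TypeScript. The endpoint, client id, and scope names are all made up for illustration (not taken from any particular deployment): the scope parameter plays the role of the notice, and sending the user to the authorization page is the consent step.

  // Hypothetical OAuth-style real-time consent: the scope parameter spells out
  // what data is being requested, and the user approves or denies at the
  // authorization server before any data is shared.
  const authorizationEndpoint = "https://provider.example/oauth/authorize"; // assumed endpoint
  const consentRequest = new URL(authorizationEndpoint);
  consentRequest.searchParams.set("response_type", "code");
  consentRequest.searchParams.set("client_id", "photo-printing-app");       // made-up client
  consentRequest.searchParams.set("redirect_uri", "https://app.example/cb");
  consentRequest.searchParams.set("scope", "contacts.read photos.read");    // acts as the notice
  // Redirecting the user to consentRequest.toString() is the "ask at the time
  // of sharing" step; the contract model would instead bury this in the terms of service.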

One challenge here is that sometimes (often?) users aren't asked before sharing happens. Here is an example from an article published yesterday about the reality: http://www.cultofmac.com/179733/19-of-ios-apps-access-your-address-book-without-your-permission-until-ios-6-report/

We also want to avoid notification fatigue and want to provide good choices instead of take-it-or-leave-it schemes (which we see all too often today). Here is an example of a better permission dialog (of course a fake screen): http://blog.benward.me/post/968515729

This sounds like one has to standardize the user interface, but that is not necessary. Instead, one can talk about the user interaction in an abstract way. Barry Leiba provides an example in this document for OAuth: 
http://tools.ietf.org/html/draft-leiba-oauth-additionalsecurityconsiderations-00

b) Data Minimization
      
With this approach the idea is that you figure out what data you need as a bare minimum for your service to work and design the system accordingly. Then, the end devices only provide the data that the various service providers really need. This approach is often chosen by researchers since it has a lot of impact on the overall system design. The beauty of this approach is that when information is not available to a party, that party cannot leak it or share it in a way that violates the user's expectations. 
 
One challenge is that those who design the system don't like to restrict themselves.
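
As a small, made-up sketch of what data minimization can look like in practice (the function and service names below are mine, purely for illustration): if the service only needs a rough location, the device can coarsen the coordinates before they ever leave it, so the provider never holds precision it could leak.

  // Data minimization sketch: only the coarse location the service actually
  // needs leaves the device; the precise fix never does.
  interface Position { latitude: number; longitude: number; }

  function coarsen(pos: Position, decimals: number = 1): Position {
    const f = Math.pow(10, decimals);
    return {
      latitude: Math.round(pos.latitude * f) / f,
      longitude: Math.round(pos.longitude * f) / f,
    };
  }

  // Placeholder for whatever transport the service actually uses.
  function sendToService(p: Position): void {
    console.log("sharing only:", p);
  }

  const preciseFix: Position = { latitude: 48.85837, longitude: 2.294481 };
  sendToService(coarsen(preciseFix)); // ~0.1 degree, roughly city-level precision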

c) User preference indication

This model is the result of realizing that some parties get the data anyway (or have it already), and so we want to tell them how to use it. This is the GEOPRIV sticky policy approach or, more recently, the DNT header. 

The drawback of that approach is that it relies heavily on data protection authorities (DPAs) to act on non-compliance. Whether there will be any enforcement remains to be seen. 
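
For completeness, here is a minimal sketch of the preference-indication idea, assuming a Node.js-style HTTP server (the analytics hook is a hypothetical placeholder): the DNT request header only expresses the user's preference; the site already has the request, and honouring the preference is up to the recipient, which is exactly why enforcement matters.

  // User-preference sketch: "DNT: 1" states a preference; the recipient still
  // has the data and must choose (or be compelled) to honour it.
  import * as http from "http";

  // Hypothetical placeholder for whatever tracking a site would otherwise do.
  function recordForAnalytics(url: string | undefined): void {
    console.log("tracked visit to", url);
  }

  http.createServer((req, res) => {
    const doNotTrack = req.headers["dnt"] === "1"; // Node lower-cases header names
    if (!doNotTrack) {
      recordForAnalytics(req.url);
    }
    res.end("hello");
  }).listen(8080);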
