Decisions by the group

At yesterday's conference call I was asked for my opinion on what to do in the PING working group. While I tried to answer that question on the phone, I am not sure I got my points across. Here is an attempt to summarize my thoughts.

There are three aspects for group members to decide on:

1) What is the scope of a guidance document? 

The W3C covers a pretty broad scope of work. In response to Robin's writeup, which focused on privacy guidance for those who develop JavaScript APIs, I argued that this scope is too narrow. There is other work in the W3C that is looking for guidance as well. My favorite example is CORS.

What does the group think is useful to cover?
 
2) Who is the target audience of the recommendation?

I had already shared my views on this topic in previous mails and tried to explain the difference between the standardization community, implementers, and deployment.

The group has to decide who the audience is, because the recommendations will be different for these different groups.

3) What is your model for privacy protection?

Before going into the details of providing guidance, it is useful to think about the main direction. I have seen different themes (and all have their pros & cons). Here are some examples (and one could combine different approaches):

a) Notice and Consent model

Before the collection of data, the data subject should be provided with a notice of what information is being collected and for what purpose, and with an opportunity to choose whether to accept that collection and use.

There are also further design aspects about when this consent should happen. One model is to push it to contracts (e.g., terms of service and privacy notices when you sign up for the service) and another model is to ask the user at the time of sharing (in real time). As a simplified summary, the latter may require additional specification work or integration of some protocol mechanisms (e.g., OAuth) and the former doesn't.
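To illustrate the real-time variant, here is a minimal sketch in TypeScript. The requestConsent and uploadContacts functions are invented for illustration; they do not correspond to any standardized API.

  // Hypothetical real-time consent check. requestConsent and
  // uploadContacts are assumed to exist; they are illustrative,
  // not part of any standardized API.
  interface ConsentRequest {
    dataType: string;   // e.g., "contacts" or "location"
    purpose: string;    // why the data is needed
    recipient: string;  // who will receive it
  }

  declare function requestConsent(req: ConsentRequest): Promise<boolean>;
  declare function uploadContacts(to: string): Promise<void>;

  async function shareContacts(recipient: string): Promise<void> {
    // The user is asked at the time of sharing, not at sign-up.
    const granted = await requestConsent({
      dataType: "contacts",
      purpose: "find friends who already use this service",
      recipient,
    });
    if (!granted) {
      return; // no consent, so no data leaves the device
    }
    await uploadContacts(recipient);
  }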

The challenge here is that sometimes (often?) users aren't asked before sharing happens. Here is an example from an article published yesterday that illustrates the reality: http://www.cultofmac.com/179733/19-of-ios-apps-access-your-address-book-without-your-permission-until-ios-6-report/

We also want to avoid notification fatigue and want to provide good choices instead of take-it-or-leave-it schemes (which we see all too often today). Here is an example of a better permission dialog (a mockup, of course): http://blog.benward.me/post/968515729

This sounds like one has to standardize the user interface, but that is not necessary. Instead, one can talk about the user interaction in an abstract way. Barry Leiba provides an example for OAuth in this document:
http://tools.ietf.org/html/draft-leiba-oauth-additionalsecurityconsiderations-00
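As a rough illustration of that abstract style, a guidance document could require the steps below without prescribing any particular user interface. All the TypeScript names here are invented:

  // Abstract description of a consent interaction. Nothing here says
  // "dialog" or "pop-up"; how the steps are rendered is left entirely
  // to the implementer.
  type ConsentOutcome = "granted" | "denied" | "dismissed";

  interface ConsentInteraction {
    // Before deciding, the user must be able to learn what data is
    // requested, for what purpose, and by whom.
    describe(dataRequested: string, purpose: string, recipient: string): void;
    // The decision itself; a dismissed interaction must be treated
    // the same as a denial.
    decide(): Promise<ConsentOutcome>;
  }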

b) Data Minimization
      
With this approach the idea is that you figure out what data you need as a bare minimum for your service to work and design the system accordingly. Then, the end device only provides the various service providers with the data they really need. This approach is often chosen by researchers since it has a lot of impact on the overall system design. The beauty of this approach is that when information is not available to a party, that party cannot leak it or share it in a way that violates the user's expectations.
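As a toy example of what minimization can look like in practice, here is a sketch that coarsens a location before it leaves the device. The two-decimal precision is an arbitrary choice for illustration:

  // Data-minimization sketch: if the service only needs a rough
  // location, the precise one never leaves the device.
  interface Coordinates {
    latitude: number;
    longitude: number;
  }

  // Two decimal places is roughly 1 km of precision; the right level
  // depends on what the service actually needs.
  function coarsen(position: Coordinates, decimals: number = 2): Coordinates {
    const factor = Math.pow(10, decimals);
    return {
      latitude: Math.round(position.latitude * factor) / factor,
      longitude: Math.round(position.longitude * factor) / factor,
    };
  }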
 
One challenge is that those who design the system don't like to restrict themselves.

c) User preference indication

This model is the result of realizing that some parties get the data anyway (or have it already), and so we want to tell them how to use it. This is the GEOPRIV sticky policy approach or, more recently, the DNT header.
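For the DNT case, the mechanism itself is simple: the browser sends a "DNT: 1" header and the server is expected to honor it. Here is a minimal server-side sketch in TypeScript on Node.js; recordForAnalytics is just a placeholder for whatever tracking the site would otherwise do:

  import { createServer } from "http";

  // Placeholder for whatever tracking/analytics the site would do.
  function recordForAnalytics(path: string): void {
    console.log("tracked visit to", path);
  }

  const server = createServer((req, res) => {
    // Node lower-cases header names; "DNT: 1" means the user asked
    // not to be tracked, so the tracking step is skipped.
    if (req.headers["dnt"] !== "1") {
      recordForAnalytics(req.url ?? "/");
    }
    res.end("ok");
  });

  server.listen(8080);

Nothing in the protocol forces the server to take that branch, which is exactly the drawback discussed next.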

The drawback of that approach is that it heavily relies on data protection authorities (DPAs) to act against non-compliance. Whether there will be any enforcement remains to be seen.
