Re: Quick overview of privacy icons (Aza and Arun's W3C position paper from privacy workshop)

Sorry.  Yes, that was meant for the list.  Thank you, Mischa.

Brad

On Wed, Sep 15, 2010 at 6:38 PM, Mischa Tuffield <mischa.tuffield@garlik.com
> wrote:

> This email was sent to me, so I have forwarded it to the list ...
>
> Mischa
>
> Begin forwarded message:
>
> *From: *"B. Kip" <bkpubnet@gmail.com>
> *Date: *15 September 2010 05:46:31 GMT+01:00
> *To: *Mischa Tuffield <mischa.tuffield@garlik.com>
> *Subject: **Re: Quick overview of privacy icons (Aza and Arun's W3C
> position paper from privacy workshop)*
>
> I really like this, particularly the idea of making these machine
> readable.  I think that sort of functionality often ends up being used in
> ways that extend the original vision.
>
> Did Privacy Icons consider the point of over-collection at all?  The 7
> things listed all deal with what is done with your data once it is
> collected.  Is there any way to encourage organizations not to collect
> unnecessary data in the first place?
>
> This may be a more difficult problem to work on, and it may not really fit
> with this particular approach.  I'm just curious whether this problem was
> talked about in the context of this project.
>
> Brad
>
> On Wed, Sep 15, 2010 at 3:14 AM, Mischa Tuffield <
> mischa.tuffield@garlik.com> wrote:
>
>> On a similar note, Renato has a paper where he presents Privacy Icons
>> within the social networking context; the work is based on ODRL, the
>> subject of the previous teleconf.
>>
>>
>> http://semanticidentity.com/Resources/Entries/2010/7/1_Virtual_Goods_+_ODRL_Workshop_2010.html
>>
>> Mischa
>> On 14 Sep 2010, at 19:09, Harry Halpin wrote:
>>
>> From http://www.w3.org/2010/api-privacy-ws/papers/privacy-ws-22.txt
>>
>>
>> Just cutting and pasting it below so people can read it before the telecon:
>>
>> = Privacy: A Pictographic Approach =
>>
>> Submitted by:
>> Aza Raskin <aza@mozilla.com>
>> Arun Ranganathan <arun@mozilla.com>
>>
>> On behalf of Mozilla
>>
>>
>> Mozilla believes that privacy policies are long documents written in
>> legalese that obfuscate meaning. Nobody reads them because they are
>> indecipherable and
>> obtuse. Yet, these are the documents that tell you what’s going on with
>> your data — how, when, and by whom your information will be used. To put
>> it another way, the privacy policy lets you know if some company can make
>> money from your information (like selling your email address to a
>> spammer).
>>
>> Creative Commons [1] did an amazing thing for copyright law. It made it
>> understandable. Creative Commons reduced the complexity of letting others
>> use your work with a set of combinable, modular icons.
>>
>> In order for privacy to have meaning for actual people, we need to follow
>> in
>> Creative Commons' footsteps. We need to reduce the complexity of privacy
>> policies to an indicator scannable in seconds.
>>
>> Mozilla believes that solving all of the problems with privacy in one go
>> is tilting at windmills. P3P [2] took a taxonomic approach and languished
>> in the resulting exponential complexity. Instead, we should be seeking to
>> answer the question: "What attributes of privacy policies and terms of
>> service should people care about?" and then highlight those attributes.
>> Further, we should only highlight attributes of privacy which are not
>> "business as usual", so that users do not become inured to constant
>> warnings.
>>
>> Finally, Mozilla believes that the attributes should be machine readable
>> to encourage user-agent and other innovation.
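>>
>> As a rough sketch of what that could look like (the "privacy-icons" link
>> relation and the token names below are made up for illustration, not a
>> proposed standard), a user agent might read the declared attributes from
>> a page like this:
>>
>>   // Hypothetical markup: <link rel="privacy-icons"
>>   //   data-tokens="no-barter warrant-required">
>>   // Relation name and token vocabulary are illustrative only.
>>   function readDeclaredIcons(doc: Document): Set<string> {
>>     const link = doc.querySelector('link[rel="privacy-icons"]');
>>     const tokens = link?.getAttribute("data-tokens") ?? "";
>>     return new Set(tokens.split(/\s+/).filter(Boolean));
>>   }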
>>
>> == Only What People Should Care About ==
>>
>> The “should” is critical. Privacy policies are often complex documents that
>> deal with subtle and expansive issues. A set of easily understood and
>> universal icons cannot possibly encode everything. Instead, we should call
>> out only the attributes which are not “business as usual”: the warning
>> flags that your privacy and data are at risk.
>>
>> Here's an example. Should we have an icon that lets you know that your
>> data will be shared with 3rd parties? Isn’t 3rd party sharing intrinsically
>> a bit suspect? The answer is a subtle no. Sharing with 3rd parties should
>> raise a warning flag, but only if that sharing isn’t required. The classic
>> example is buying a book on Amazon.com and getting it shipped to your home.
>> Amazon needs to share your home address with UPS, and Privacy Icons
>> shouldn’t penalize them for that necessary disclosure. In other words,
>> Privacy Icons should only highlight 3rd party data sharing when you do not
>> have a reasonable expectation that your data is being shared.
>>
>> The “should” is a major differentiator from many of the prior approaches,
>> like the taxonomic P3P or Lorrie Cranor’s crowd-sourced Privacy Duck [3].
>>
>>
>> == Bolt-on Approach ==
>>
>> Privacy policies and Terms of Services are complex documents that
>> encapsulate a lot of situation-specific detail. The Creative Commons
>> approach is to reduce the complexity of sharing to a small number of
>> licenses from which you choose. That simply doesn’t work here: there are
>> too many edge-cases and specifics that each company has to put into their
>> privacy policy. There can be no catch-all boiler-plate. We seem to have
>> lost before we have even begun. There’s another approach.
>>
>> Here’s where we stand: companies need to write their own privacy
>> policies/terms of service, replete with company-specific detail. Why?
>> Because a small number of licenses can’t capture the required complexity.
>>
>> The problem is that for everyday people, reading and understanding those
>> necessarily custom privacy policies is time consuming and nigh impossible.
>>
>> Here’s a solution: create a set of easily-understood Privacy Icons that
>> “bolt on to” a privacy policy. When you add a Privacy Icon to your privacy
>> policy, it says the equivalent of “No matter what the rest of this privacy
>> policy says, the following is true and preempts anything else in this
>> document…”. The Privacy Icon makes an iron-clad guarantee about some
>> portion of how a company treats your data. For example, if a privacy
>> policy includes the icon for “None of your data is sold or shared with 3rd
>> parties”, then no matter what the privacy policy says in the small print,
>> it gets preempted by the icon and the company is legally bound never to
>> share or sell your data. Of course, the set of icons still needs to be
>> decided.  Mozilla held a workshop on the 27th of January 2010 to help
>> decide these kinds of questions, and will host further events in the
>> future.
>>
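>> The preemption rule can be sketched in code. Assuming some illustrative
>> term names (made up for this sketch, not taken from any actual policy), a
>> bolted-on icon simply overrides the corresponding term of the underlying
>> policy text:
>>
>>   // Illustrative terms a policy might take a position on.
>>   interface PolicyTerms {
>>     sellsOrSharesWithThirdParties: boolean;
>>     retainsDataIndefinitely: boolean;
>>   }
>>
>>   // A bolted-on icon is an iron-clad guarantee: whatever the small
>>   // print says for that term, the icon's value wins.
>>   function applyIcons(
>>     policy: PolicyTerms,
>>     iconGuarantees: Partial<PolicyTerms>
>>   ): PolicyTerms {
>>     return { ...policy, ...iconGuarantees };
>>   }
>>
>>   // The small print permits selling, but the displayed icon guarantees
>>   // "none of your data is sold or shared with 3rd parties": icon wins.
>>   const effective = applyIcons(
>>     { sellsOrSharesWithThirdParties: true, retainsDataIndefinitely: true },
>>     { sellsOrSharesWithThirdParties: false }
>>   );
>>   // effective.sellsOrSharesWithThirdParties === false
>>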
>> == Lawyer Selected, Reader Approved ==
>>
>> Since its release, Creative Commons has continually pared down the number
>> of licenses it provides and is now down to just two icons, one with two
>> states and one with three. It has to be so simple because everyday people
>> choose their own license. Privacy Icons don’t have that constraint. A
>> qualified lawyer chooses what icons to bind to their privacy policy, and
>> so there can be substantially more icons to choose from, allowing the
>> creation of a rich privacy story. As long as the icons are understandable
>> by an everyday person, we are golden.
>>
>> == Machine Readable ==
>>
>> Some of the attributes will potentially have a bad normative value, like
>> an icon that indicates your data may be sold to third parties. The
>> question becomes: why would any company display such an icon in their
>> privacy policy? Wouldn’t they instead opt not to use the Privacy Icons at
>> all? This is the largest problem facing the Privacy Icons idea. Aren’t we
>> creating an incentive system whereby good companies/services will display
>> Privacy Icons and bad companies/services will not?
>>
>> If the attributes become widely adopted, then the correlation between good
>> companies using the icons and bad companies not using them becomes rather
>> strong. If a privacy policy doesn’t include any icons, it runs the risk of
>> becoming synonymous with a policy that makes no guarantees against using
>> your data for evil. The absence of Privacy Icons becomes stigmatic.
>>
>> Asking people to notice the absence of something is asking the
>> implausible. People generally don’t notice an absence, just a presence.
>> The solution hinges on Privacy Icons being machine readable and Firefox
>> being used by 400 million people world-wide. If Firefox encounters a
>> privacy policy that doesn’t have called-out attributes, we’ll
>> automatically display the icons with the poorest guarantees: your data may
>> be sold to 3rd parties, your data may be stored indefinitely, your data
>> may be turned over to law enforcement without a warrant, etc. This way,
>> companies are incentivized to display these attributes and thereby be
>> bound to protecting user privacy appropriately.
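>>
>> That fallback behaviour is simple to sketch (token names are illustrative,
>> reusing the declared-attribute set from the earlier sketch):
>>
>>   // Poorest-guarantee defaults shown when a page declares nothing.
>>   const WORST_CASE: string[] = [
>>     "data-may-be-sold-to-3rd-parties",
>>     "data-stored-indefinitely",
>>     "handed-to-law-enforcement-without-warrant",
>>   ];
>>
>>   // If no attributes are called out, fall back to the worst case.
>>   function iconsToDisplay(declared: Set<string>): string[] {
>>     return declared.size === 0 ? WORST_CASE : [...declared];
>>   }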
>>
>> == Attribute Strawperson ==
>>
>> As a strawperson, here are the 7 things that we believe matter most in
>> privacy (sketched as a data type after the list):
>>
>> * Is your data used for secondary use? And is it shared with 3rd parties?
>> * Is your data bartered?
>> * Under what terms is your data shared with the government and with law
>> enforcement?
>> * Does the company take reasonable measures to protect your data in all
>> phases of collection and storage?
>> * Does the service give you control of your data?
>> * Does the service use your data to build and save a profile for
>> non-primary use?
>> * Are ad networks being used and under what terms?
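>>
>> As a sketch, the strawperson could be carried as a machine-readable record
>> along these lines (field names and value shapes are made up here; the
>> first question above maps to two fields):
>>
>>   // The seven strawperson attributes as a machine-readable record.
>>   interface PrivacyAttributes {
>>     secondaryUse: boolean;           // used beyond the primary purpose?
>>     sharedWithThirdParties: boolean;
>>     bartered: boolean;
>>     lawEnforcementAccess: "warrant-required" | "on-request" | "undisclosed";
>>     reasonableProtection: boolean;   // in collection and storage
>>     userControl: boolean;            // can you view, edit, and delete it?
>>     profileForNonPrimaryUse: boolean;
>>     adNetworks: "none" | "disclosed-terms" | "undisclosed";
>>   }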
>>
>> === Secondary Use of Data ===
>>
>> *Is your data used for secondary use?* The European Union has spent time
>> codifying and refining the idea of “secondary use”: the use of data for
>> something other than the purpose for which the collectee believes it was
>> collected. Mint.com uses your login information to import your financial
>> data from your banks — with your explicit permission. That’s primary use
>> and shouldn’t be punished. The RealAge [4] test poses as a cute
>> questionnaire and then turns around and sells your data. That’s secondary
>> use and is fishy.
>>
>> When you sign up to use a service, you should care whether your data will
>> only be used for that service. If the service does use your data for
>> secondary use, they should disclose those uses. If they share your data
>> with 3rd parties, then they should disclose that list too.
>>
>> === Bartering User Data ===
>>
>> *Is your data bartered?* You should know when someone is making a gain off
>> your back. You should also know roughly how and for what that data is
>> being
>> bartered.
>>
>> === Government and Law Enforcement as Third Parties ===
>>
>> *Under what terms is your data shared with the government and with law
>> enforcement?* Do they just hand it over without a warrant or a subpoena?
>>
>> === Private Data Storage Considerations ===
>>
>> *Does the company take reasonable measures to protect your data in all
>> phases of collection and storage?* There are numerous ways that your data
>> can be protected: from using SSL during transmission, to encryption on the
>> server, to deleting your data after it is no longer needed. Does the
>> company
>> protect your data during transmission, storage, and from employees? This
>> icon should tell you what the weak link is.
>>
>> === User Control of Data ===
>>
>> *Does the service give you control of your data?* Can you delete your data
>> if you choose? Can you edit it? What level of control do you have over the
>> data stored on their server?
>>
>> === User Profiles for Non-Primary Use ===
>>
>> *Does the service use your data to build and save a profile for
>> non-primary use?* This is a subtle one, as we want to include the concept
>> of PII (personally identifiable information). What we are worried about is
>> companies secretly building a dossier on you — say, by taking your email
>> address and then buying more information from a 3rd party about that email
>> address to get, say, your credit rating — and then using that profile for
>> purposes to which you haven’t agreed.
>>
>>
>> === Use of Advertising Networks ===
>>
>> *Are ad networks being used and under what terms?* On the web most pages
>> include ads of some form, and the prevalence of behavioral tracking is on
>> the rise.  While letting users get a handle on ad networks is
>> important, raising the alarm on every page would be counter-productive. We
>> haven’t yet figured out how to handle ad networks and are looking for more
>> thought here.
>>
>> == Conclusion ==
>>
>> Mozilla believes that it is possible to have a small set of easily
>> understood icons that can represent actionable privacy choices to a
>> user, and is open to discussing standardizing these across user
>> agents.  Additionally, we should consider making these icons machine
>> readable, and discuss the best way to do so.
>>
>>
>> [1] http://creativecommons.org/
>> [2] http://www.w3.org/P3P/
>> [3] http://lorrie.cranor.org/
>> [4] http://www.zdnet.com/blog/healthcare/is-realage-a-scam/2040
>>
>
> ___________________________________
> Mischa Tuffield PhD
> Email: mischa.tuffield@garlik.com
> Homepage - http://mmt.me.uk/
> Garlik Limited, 1-3 Halford Road, Richmond, TW10 6AW
> +44(0)845 645 2824  http://www.garlik.com/
> Registered in England and Wales 535 7233 VAT # 849 0517 11
> Registered office: Thames House, Portsmouth Road, Esher, Surrey, KT10 9AD
>
>

Received on Wednesday, 15 September 2010 12:54:35 UTC