- From: Shane Wiley <wileys@yahoo-inc.com>
- Date: Wed, 8 Feb 2012 06:41:47 -0800
- To: Rigo Wenning <rigo@w3.org>, "public-tracking@w3.org" <public-tracking@w3.org>
- CC: Jonathan Mayer <jmayer@stanford.edu>
Rigo, Please see my comments below in [ ]:

-----Original Message-----
From: Rigo Wenning [mailto:rigo@w3.org]
Sent: Wednesday, February 08, 2012 5:48 AM
To: public-tracking@w3.org
Cc: Shane Wiley; Jonathan Mayer
Subject: Re: Deciding Exceptions (ISSUE-23, ISSUE-24, ISSUE-25, ISSUE-31, ISSUE-34, ISSUE-49)

Shane,

I think you raise a very legitimate question, because the risk will determine how much countermeasure we need and how much burden is seen as necessary or acceptable. The lack of a clear attack scenario doesn't make our task easier. Could you contact your legal department and ask about the number of subpoenas, warrants, and other requests for access to information? But they will also tell you that some of those can't be published.

[As you suspected, I'm unable to share specific information in this area for a number of reasons - many of them legal. I understand some in this group would like to limit data collection to circumvent local legal process, but I don't believe that is a real risk in today's world (more of a perceived risk than an actualized one).]

Another unsolved issue with use limitations is that they may disappear when a company that committed to "not use" goes out of business and its assets and information are taken up by independent third parties not bound by those use limitations.

[Please provide a single example of where this has occurred with cross-site data attached to an anonymous cookie ID. All the examples I can think of involved transactional data (such as in the Borders case).]

I think there are many cases where profile information led to increased insurance fees and similar bad consequences for users.

[Please provide a single example of where this has occurred. This is a figment of consumer-advocate imagination, and the DAA recently released requirements placing these imagined risks outside the scope of legitimate use.]

So I think it is not without merit to talk about certain limitations on retention or collection here.

[I disagree - no one has been able to provide an example of a real and actualized threat to date.]

Furthermore, I think there is also a psychological component: a use limitation is solely within the hands of the trackers. This means there is a "trust component" to the claim that a service is actually not using the data collected for frequency capping to optimize its ads "just a bit". So if DNT=1 is reduced to just a promise that is hard to verify, the value of the signal isn't that big anymore, as nobody is seen to have done anything. The only thing we see is a promise not to do something, without a clear expiry date.

[Agreed that "trust" is a factor here, and there is a bias toward imagining all actors as bad actors. There will be bad actors that knowingly and maliciously state they are DNT compliant when they are not. This is a fact of life. I don't believe it's appropriate to place a considerable burden on good actors in light of this reality to somehow bring balance to the situation. We're punishing those that are working toward doing the right thing with incredibly complex and expensive implementation guidelines while not affecting the reality of bad actors at all.]

Does that help your understanding? As indicated in another email, remedies include collection limitations and strict retention limitations besides the pure use limitation. So in light of our definitions of exemption and exception, frequency capping is an exception, not an exemption.
And if the exception is so vague that the default of the general rule becomes the minor case, it is, by definition, not an exception anymore, but an exemption that may be so broad that it calls into question the overall investment in DNT. This is not to criticize you, but to get a feeling for why we have a conflict between the parties here.

[This has not advanced my understanding of the topic being discussed, unfortunately. I continue to believe that use limitations are the most appropriate path, in balance with the real risks, to the successful implementation of DNT across the globe and across all 1st and 3rd parties that rely on advertising as their sole or majority source of revenue.]

Best,
Rigo

On Tuesday 07 February 2012 21:52:01 Shane Wiley wrote:
> #7 - What privacy risks? When we remove the use of cross-site data
> collection to modify a user's experience, can you state a few examples of
> real-world harms in this area? Is there an example where this information
> was used in a legal proceeding? What is the rate of misuse of this
> information relative to its prevalence today? Anyone can play the "what
> if" game, but I'd ask that you provide real-world examples where anonymous
> information that could have been used for online behavioral advertising has
> ever been used to harm a user in some other context.
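The disagreement above turns on the difference between a use limitation (the server still collects cross-site data when DNT=1 is sent, but promises not to use it) and a collection limitation (the server never stores an identifier at all). A minimal sketch of the latter, assuming an illustrative frequency-capping cookie; the handler, port, and cookie names are hypothetical and not part of any DNT specification:

    # Sketch: honoring DNT: 1 as a collection limitation rather than a
    # use limitation. When the header is present, no tracking cookie is
    # written, so there is nothing to "use" later and nothing to trust.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class AdHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            dnt = self.headers.get("DNT")  # DNT request header: "1" means do not track
            self.send_response(200)
            if dnt != "1":
                # Collection permitted: set an (illustrative) frequency-capping cookie.
                self.send_header("Set-Cookie", "freqcap=abc123; Max-Age=604800")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"ad served\n")

    if __name__ == "__main__":
        HTTPServer(("", 8000), AdHandler).serve_forever()

Under this reading, compliance is externally observable: a user agent sending DNT: 1 can verify that no Set-Cookie header arrives, rather than trusting a promise about how stored data will or will not be used later.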
Received on Wednesday, 8 February 2012 14:46:06 UTC