Re: Deciding Exceptions (ISSUE-23, ISSUE-24, ISSUE-25, ISSUE-31, ISSUE-34, ISSUE-49)


I think you raise a very legitimate question, because the risk will determine 
how much countermeasure we need and how much burden is seen as 
necessary or acceptable. The lack of a clear attack scenario doesn't make our 
task easier. 

Could you contact your legal department and ask about the number of subpoenas, 
warrants, and other requests for access to information? They will probably also 
tell you, however, that some of those figures can't be published. 

Another unsolved issue with use limitations is that they may disappear when a 
company that committed to "not use" the data goes out of business and its assets 
and information are taken over by independent third parties not bound by those 
"use" commitments. 
I think there are many cases where profile information led to increases in 
insurance fees and similar bad consequences for users. 

So I think it is not without merit to discuss certain limitations on 
retention or collection here. 

Furthermore, I think there is also a psychological component: a use 
limitation lies solely in the hands of the trackers. This means there is a 
"trust component" to the claim that a service is actually not using the data 
collected for frequency capping to optimize its ads "just a bit". So if DNT=1 
is reduced to a mere promise that is hard to verify, the value of the signal 
shrinks considerably, as nobody can be seen to have done anything. The only 
thing we see is a promise not to do something, without a clear expiry date.

Does that help your understanding? As indicated in another email, remedies 
include collection limitations and strict retention limitations alongside the 
pure use limitation. So in light of our definitions of exemption and exception, 
frequency capping is an exception, not an exemption. And if the exception is 
so vague that the default of the general rule becomes the minor case, it is -
by definition- not an exception anymore, but an exemption that may be so broad 
that it calls into question the overall investment in DNT. This is not to 
criticize you, but to get a feeling for why we are having a conflict between 
the parties here.



On Tuesday 07 February 2012 21:52:01 Shane Wiley wrote:
> #7 - What privacy risks?  When we remove the use of cross-site data
> collection to modify a user's experience, can you state a few examples of
> real-world harms in this area?  Is there an example where this information
> was used in legal proceeding?  What is the rate of misuse of this
> information in relation to its presence today?  Anyone can play the "what
> if" game, but I'd ask that you provide real-world examples where anonymous
> information that could have been used for online behavioral advertising has
> ever been used to harm a user in some other context.

Received on Wednesday, 8 February 2012 13:50:25 UTC