Re: ISSUE-5: What is the definition of tracking?

On 10/25/11 9:17 PM, Jonathan Mayer wrote:
> On Oct 25, 2011, at 2:13 PM, David Wainberg wrote:
>
>>
>> On 10/24/11 8:18 PM, Jonathan Mayer wrote:
>>> A few responding thoughts below.
>>>
>>> Jonathan
>>>
>>>
>>> I would strongly oppose limiting our definition of tracking to only cover pseudonymously identified or personally identified data.  There are a number of ways to track a user across websites without a single pseudonymous or personal identifier.
>> I'm not sure what you mean here. Can you provide examples?
> Any means of tracking that relies on fragmented or probabilistic information.  For example, browser fingerprinting.  (See Peter Eckersley's paper "How Unique Is Your Web Browser?")
Ah. I would have included that in pseudonymously identified, because if 
the server stores data against the fingerprint, that data will be keyed 
to a hash or some other identifier derived from the fingerprint.
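For illustration, here is a minimal sketch (Python, with invented 
attribute names; not drawn from any real implementation) of how a 
server might derive such a pseudonymous key from fingerprint 
attributes:

    import hashlib

    def fingerprint_key(user_agent, screen_resolution, timezone, fonts):
        """Derive a stable pseudonymous key from fingerprint attributes."""
        # Concatenate the attributes in a fixed order so the same
        # browser configuration always produces the same string.
        raw = "|".join([user_agent, screen_resolution, timezone,
                        ",".join(sorted(fonts))])
        # The digest, not the raw attributes, is what gets stored and
        # looked up against; in effect a pseudonymous identifier.
        return hashlib.sha256(raw.encode("utf-8")).hexdigest()

The point being that the digest behaves as a stable identifier even 
though no cookie or explicit ID was ever set.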
>
>>> I'm very sympathetic to wanting to discuss the policy motivations underlying the definitions we establish.  But I'm concerned that, in practice, the discussion would be a rat hole for the working group.  There's just too much material to cover, and there are some significant differences of opinion that would take far longer to iron out in the general case than in the context of specific definitions.  We trended towards an unproductive general policy conversation in Cambridge, in some measure at my prompting; in retrospect I think the co-chairs were wise to move on.
>>>
>> Hard != unproductive. This standard seems to be 10% tech and 90% policy. How will we develop rational policy without exploring the underlying policy rationale?
> Our various definitions will explore the underlying policy - just as applied to specific issues, not in the general case.  Experience already suggests that's a much more efficient approach.
>
> As for whether the standard will be completely consistent with an underlying policy rationale: I think it should be, and it ultimately largely will be.  That's not incompatible with moving issue-by-issue.
I disagree, but since I'm at risk of turning into a broken record on 
this point, I'll drop it for now.
>
>>> I don't follow this point.  The first party vs. third party distinction has, in my understanding, been an attempt to carefully define the sort of organizational boundaries that give rise to privacy concerns.  I haven't viewed the definition as a shortcut in any sense - it does a lot of work.
>> Can you elaborate on how organizational boundaries give rise to privacy concerns? I'm not saying they don't; I'm genuinely interested to see it spelled out.
>
> Organizational boundaries are a cornerstone of many areas of regulatory law and policy.  They enable market signals, consumer choices, business pressures, and government enforcement for countless product qualities.  Organizational boundaries are particularly important for online privacy: organizations have widely varying incentives surrounding user data, and user data is very easy to use and copy.  One of the most effective privacy choices available to a consumer - which turns up in countless privacy regulations in the U.S. and elsewhere - is a limit on which organizations have unfettered access to their data in the first place.
Agreed. However, the organizational boundaries seem most relevant in 
this case for attaching liability. I'm not convinced that, for DNT, 
organizational boundaries map well to the privacy risks associated with 
the type of data in question. Yes, "who do I trust?" is an important 
consideration for users, but it is an extremely difficult metric to 
evaluate and manage on the internet. More important still is "who do I 
trust to do what?", which is even harder for users to get their heads 
around. To me, a simpler and more effective approach is built on, e.g., 
the type of data, how long it is retained, and the nature of the other 
data it can be combined with, regardless of whose hands it is in.

Received on Thursday, 27 October 2011 16:03:01 UTC