
Re: ISSUE-5: What is the definition of tracking?

From: Sean Harvey <sharvey@google.com>
Date: Thu, 27 Oct 2011 12:05:06 -0400
Message-ID: <CAFy-vucjmTYN6w0KGo=2t3WeGF9iU38y=Qdr8_Yc7WHKrfbYUA@mail.gmail.com>
To: David Wainberg <dwainberg@appnexus.com>
Cc: Jonathan Mayer <jmayer@stanford.edu>, "public-tracking@w3.org Group WG" <public-tracking@w3.org>
Yes, to be clear, my intent was that fingerprinting and all other forms of
individual identification -- whether personally identifiable or
pseudonymous -- would be captured here, regardless of the technology
involved.

On Thu, Oct 27, 2011 at 12:02 PM, David Wainberg <dwainberg@appnexus.com> wrote:

>
>
> On 10/25/11 9:17 PM, Jonathan Mayer wrote:
>
>> On Oct 25, 2011, at 2:13 PM, David Wainberg wrote:
>>
>>
>>> On 10/24/11 8:18 PM, Jonathan Mayer wrote:
>>>
>>>> A few responding thoughts below.
>>>>
>>>> Jonathan
>>>>
>>>>
>>>> I would strongly oppose limiting our definition of tracking to only
>>>> cover pseudonymously identified or personally identified data.  There are a
>>>> number of ways to track a user across websites without a single
>>>> pseudonymous or personal identifier.
>>>>
>>> I'm not sure what you mean here. Can you provide examples?
>>>
>> Any means of tracking that relies on fragmented or probabilistic
>> information.  For example, browser fingerprinting.  (See Peter Eckersley's
>> paper "How Unique Is Your Web Browser?")
>>
> Ah. I would have included that in pseudonymously identified, because if
> data is stored against it by the server, it will be stored against a hash
> or something based on the fingerprint.
>
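[The hashing David describes can be sketched as follows. This is an illustrative example only, not from the thread: the attribute names and hashing scheme are hypothetical, but they show how a server could derive a stable pseudonymous key from observed browser attributes and store data against it, exactly as it would against a cookie ID.]

```python
import hashlib

def fingerprint_key(user_agent, screen, timezone, fonts):
    """Combine observed browser attributes into one canonical string
    and hash it, yielding a stable pseudonymous storage key."""
    raw = "|".join([user_agent, screen, timezone, ",".join(sorted(fonts))])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

# The same browser configuration always maps to the same key,
# so the server can accumulate a profile without any identifier
# ever being set on the client.
key = fingerprint_key(
    "Mozilla/5.0 (X11; Linux x86_64)",
    "1920x1080",
    "UTC-5",
    ["Arial", "DejaVu Sans", "Liberation Serif"],
)
print(key)
```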
>
>>>> I'm very sympathetic to wanting to discuss the policy motivations
>>>> underlying the definitions we establish.  But I'm concerned that, in
>>>> practice, the discussion would be a rat hole for the working group.
>>>>  There's just too much material to cover, and there are some significant
>>>> differences of opinion that would take far longer to iron out in the
>>>> general case than in the context of specific definitions.  We trended
>>>> towards an unproductive general policy conversation in Cambridge, in some
>>>> measure at my prompting; in retrospect I think the co-chairs were wise to
>>>> move on.
>>>>
>>> Hard != unproductive. This standard seems to be 10% tech and 90%
>>> policy. How will we develop rational policy without exploring the
>>> underlying policy rationale?
>>>
>> Our various definitions will explore the underlying policy - just as
>> applied to specific issues, not in the general case.  Experience already
>> suggests that's a much more efficient approach.
>>
>> As for whether the standard will be completely consistent with an
>> underlying policy rationale: I think it should be, and it ultimately
>> largely will be.  That's not incompatible with moving issue-by-issue.
>>
> I disagree, but since I'm at risk of turning into a broken record on this
> point, I'll drop it for now.
>
>
>>>> I don't follow this point.  The first party vs. third party distinction
>>>> has, in my understanding, been an attempt to carefully define the sort of
>>>> organizational boundaries that give rise to privacy concerns.  I haven't
>>>> viewed the definition as a shortcut in any sense - it does a lot of work.
>>>>
>>> Can you elaborate on how organizational boundaries give rise to privacy
>>> concerns? I'm not saying they don't; I'm genuinely interested to see it
>>> spelled out.
>>>
>>
>> Organizational boundaries are a cornerstone of many areas of regulatory
>> law and policy.  They enable market signals, consumer choices, business
>> pressures, and government enforcement for countless product qualities.
>>  Organizational boundaries are particularly important for online privacy:
>> organizations have widely varying incentives surrounding user data, and
>> user data is very easy to use and copy.  One of the most effective privacy
>> choices available to a consumer - which turns up in countless privacy
>> regulations in the U.S. and elsewhere - is a limit on which organizations
>> have unfettered access to their data in the first place.
>>
> Agreed. However, the organizational boundaries seem most relevant in this
> case for attaching liability. I'm not convinced that, for DNT,
> organizational boundaries map well to the privacy risks associated with the
> type of data in question. Yes, "who do I trust" is an important
> consideration for users, but an extremely difficult metric to evaluate and
> manage on the internet. And more important is "who do I trust to do what,"
> which is way harder for users to get their heads around. To me a simpler
> and more effective approach is built on, e.g., the type of data, how long
> it's retained, and the nature of other data it can be combined with,
> regardless of whose hands it's in.
>
>


-- 
Sean Harvey
Business Product Manager
Google, Inc.
212-381-5330
sharvey@google.com
Received on Thursday, 27 October 2011 16:05:42 UTC
