- From: Jonathan Mayer <jmayer@stanford.edu>
- Date: Tue, 25 Oct 2011 18:17:44 -0700
- To: David Wainberg <dwainberg@appnexus.com>
- Cc: Sean Harvey <sharvey@google.com>, "public-tracking@w3.org Group WG" <public-tracking@w3.org>
On Oct 25, 2011, at 2:13 PM, David Wainberg wrote:

> On 10/24/11 8:18 PM, Jonathan Mayer wrote:
>> A few responding thoughts below.
>>
>> Jonathan
>>
>> I would strongly oppose limiting our definition of tracking to only cover pseudonymously identified or personally identified data. There are a number of ways to track a user across websites without a single pseudonymous or personal identifier.
>
> I'm not sure what you mean here. Can you provide examples?

Any means of tracking that relies on fragmented or probabilistic information. For example, browser fingerprinting. (See Peter Eckersley's paper "How Unique Is Your Web Browser.")

>> I'm very sympathetic to wanting to discuss the policy motivations underlying the definitions we establish. But I'm concerned that, in practice, the discussion would be a rat hole for the working group. There's just too much material to cover, and there are some significant differences of opinion that would take far longer to iron out in the general case than in the context of specific definitions. We trended towards an unproductive general policy conversation in Cambridge, in some measure at my prompting; in retrospect I think the co-chairs were wise to move on.
>
> Hard != unproductive. This standard seems to be 10% tech and 90% policy. How will we develop rational policy without exploring the underlying policy rationale?

Our various definitions will explore the underlying policy - just as applied to specific issues, not in the general case. Experience already suggests that's a much more efficient approach.

As for whether the standard will be completely consistent with an underlying policy rationale: I think it should be, and it ultimately largely will be. That's not incompatible with moving issue-by-issue.

>> I don't follow this point. The first party vs. third party distinction has, in my understanding, been an attempt to carefully define the sort of organizational boundaries that give rise to privacy concerns. I haven't viewed the definition as a shortcut in any sense - it does a lot of work.
>
> Can you elaborate on how organizational boundaries give rise to privacy concerns? I'm not saying they don't; I'm genuinely interested to see it spelled out.

Organizational boundaries are a cornerstone of many areas of regulatory law and policy. They enable market signals, consumer choices, business pressures, and government enforcement for countless product qualities. Organizational boundaries are particularly important for online privacy: organizations have widely varying incentives surrounding user data, and user data is very easy to use and copy. One of the most effective privacy choices available to a consumer - which turns up in countless privacy regulations in the U.S. and elsewhere - is a limit on which organizations have unfettered access to their data in the first place.
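As a minimal sketch of the fingerprinting point above: a tracker can recognize a browser by combining several ordinary attributes, none of which is a pseudonymous or personal identifier on its own. The specific attribute names and the hashing step here are illustrative assumptions, not anything defined in this thread or by the Working Group.

# Illustrative only: derive a probabilistic "fingerprint" from ordinary
# browser attributes (e.g. as reported via HTTP headers and client-side
# APIs), with no cookie and no single assigned identifier.
import hashlib

def fingerprint(attributes: dict) -> str:
    """Hash a canonical, sorted view of browser attributes into one token."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# None of these example values identifies the user by itself, but the
# combination is often distinctive enough to recognize the same browser
# across unrelated sites (cf. Eckersley, "How Unique Is Your Web Browser?").
sample = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7) ...",
    "accept_language": "en-US,en;q=0.8",
    "screen": "1440x900x24",
    "timezone_offset": "-420",
    "installed_plugins": "Flash 11.0;QuickTime 7.7;Java 1.6",
}
print(fingerprint(sample))

The point of the sketch is that the "identifier" is only probabilistic and assembled from fragments, which is why a definition of tracking limited to pseudonymous or personal identifiers would not cover it.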
Received on Wednesday, 26 October 2011 01:18:22 UTC