Re: Third-Party Web Tracking: Policy and Technology Paper outlining harms of tracking

Hi Joe - Comments inline.



From:  Joseph Lorenzo Hall <joe@cdt.org>
Date:  Thursday, October 11, 2012 2:46 PM
To:  Alan Chapell <achapell@chapellassociates.com>
Cc:  "<public-tracking@w3.org>" <public-tracking@w3.org>, Jonathan Mayer
<jmayer@stanford.edu>
Subject:  Re: Third-Party Web Tracking: Policy and Technology Paper
outlining  harms of tracking

> Hi Alan, I don't mean to pile on or seem confrontational...

I'm not taking it that way at all. I wish more of your colleagues were
willing and able to articulate their points on these topics here.

> 
> But for those of us who have a background in privacy theory and scholarship,
> "show us the harms" comes up often, and I don't think it's exactly the right
> question.
> 
> Privacy implications/intrusions are often emotional, intangible and subject to
> considerable variation amongst individuals. I think the Berkeley survey
> write-up makes a number of key points: at least in the US, the FTC has had to
> police privacy issues on a piecemeal basis, and the increasing collection of
> data about the web-surfing public coupled with problems in protecting and in
> some cases exploiting that information (cite to settlements/actions) means we
> really need an effective way to allow users to signal that they don't want
> this collection and subsequent implications (with some narrow common-sense
> exceptions).

Here's the issue I have with the 'ethereal harm' argument in this context.
If your concern is that data collection will ultimately result in a 'big
brother' scenario, then it logically follows that you shouldn't want ANY
data collected about consumer Internet usage. Over time, that type of data
collection is just as likely to create the ethereal harms you and others
continue to speak about. As a working group, it seems we have moved
away from that approach. You may disagree with that larger approach, but
that seems to be where we're heading.

Our current approach in this working group has been to enumerate certain
Permitted Uses - which is sort of a leap of faith. Meaning, we're trusting
the entity in question not to use that information for a purpose outside
the scope of the Permitted Use. (e.g., data collected for security
purposes can't be used for ad targeting when DNT is on.)

So if we're going to allow SOME data collection, then by definition, other
types of reasonable Permitted Uses should be presumptively allowed so long
as: a) there is a legitimate need for the information, b) the data holder
promises to use it for the intended purposes, and c) there isn't any
demonstrable harm in allowing the Permitted Use.

Hence, I would respectfully suggest that it seems inappropriate for folks to
continually cite ethereal - but hard to demonstrate in real-world settings -
privacy harms in this context when there has been a clearly articulated and
legitimate business use case.



> It certainly is complicated in the bigger, global picture ... and I'd like
> to think we can design something that effectively does that. And, defaults
> or not, many of us will be educating users about these tools (and how
> to grant exceptions and what that means) when the spec gets adopted.
> 
> best, Joe
> 
> --
> Joseph Lorenzo Hall
> Senior Staff Technologist
> Center for Democracy & Technology
> https://www.cdt.org/
> 
> On Oct 10, 2012, at 16:55, Alan Chapell <achapell@chapellassociates.com>
> wrote:
> 
>> Hi Jonathan - 
>> 
>> In addition to my questions below, I'm curious whether your research has
>> documented specific examples of these harms occurring in the real world?
>> 
>> Thanks again,
>> 
>> Alan
>> 
>> From:  Alan Chapell <achapell@chapellassociates.com>
>> Date:  Saturday, October 6, 2012 5:14 AM
>> To:  <public-tracking@w3.org>, Jonathan Mayer <jmayer@stanford.edu>
>> Subject:  Third-Party Web Tracking: Policy and Technology Paper outlining
>> harms of tracking
>> 
>>> Hi Jonathan - 
>>> 
>>> A few days ago, you invited me (via IRC) to review your recent paper,
>>> which - among other items - outlines some of the potential harms of
>>> tracking. (See
>>> https://www.stanford.edu/~jmayer/papers/trackingsurvey12.pdf)
>>> 
>>> Thanks - As you may have noticed, I've been asking a number of folks in the
>>> WG for examples of harms and haven't received very much information in
>>> response. So I want to applaud your effort to help provide additional
>>> information and to facilitate a dialog. That said, I want to make sure I
>>> understand your thinking here - or at least help clarify some of the
>>> distinctions you may be drawing.
>>> 
>>> I'm curious whether your position is that those harms are equally apparent
>>> in a first party setting - where a first party utilizes its own data for
>>> ad targeting across the Internet? For example, take your scenario involving
>>> "an actor that causes harm to a consumer." Is that not also possible in a
>>> first party context? Does the first party not have "the means," "the
>>> access," and, at least potentially, the ability to take the "action" that
>>> causes the harms you lay out? (e.g., "Publication, a less favorable offer,
>>> denial of a benefit, or termination of employment. Last, a particular harm
>>> that is inflicted. The harm might be physical, psychological, or economic.")
>>> Do you believe that a direct relationship between consumers and first party
>>> websites completely mitigates that risk of harm - even where the first
>>> parties have significant stores of personally identifiable data?
>>> 
>>> 
>>> Has your position evolved over the past few months? Correct me if I'm
>>> mistaken, but I believe that one of the proposals offered by Mozilla /
>>> Stanford and EFF sought to address forms of first party tracking. Do I have
>>> that correct?
>>> 
>>> Thanks - I look forward to hearing your thoughts.
>>> 
>>> 
>>> 
>>> 
>>> 
>>> Excerpt from your paper for the convenience of others.
>>> 
>>> 
>>> "When considering harmful web tracking scenarios, we find it helpful to
>>> focus on four variables. First, an actor that causes harm to a consumer. The
>>> actor might, for example, be an authorized employee, malicious employee,
>>> competitor, acquirer, hacker, or government agency. Second, a means of
>>> access that enables the actor to use tracking data. The data might be
>>> voluntarily transferred, sold, stolen, misplaced, or accidentally
>>> distributed. Third, an action that harms the consumer. The action could be,
>>> for example, publication, a less favorable offer, denial of a benefit, or
>>> termination of employment. Last, a particular harm that is inflicted. The
>>> harm might be physical, psychological, or economic.
>>> 
>>> The countless combinations of these variables result in countless possible
>>> bad outcomes for consumers. To exemplify our thinking, here is one commonly
>>> considered scenario: A hacker (actor) breaks into a tracking company (means
>>> of access) and publishes its tracking information (action), causing some
>>> embarrassing fact about the consumer to become known and inflicting
>>> emotional distress (harm).[9]
>>> 
>>> Risks associated with third-party tracking are heightened by the lack of
>>> market pressure to exercise good security and privacy practices. If a
>>> first-party website is untrustworthy, users may decline to visit it. But,
>>> since users are unaware of the very existence of many third-party websites,
>>> they cannot reward responsible sites and penalize irresponsible sites.[10]"
>>> 
>>> 

Received on Thursday, 11 October 2012 19:32:16 UTC