- From: Lauren Gelman <gelman@blurryedge.com>
- Date: Wed, 15 Feb 2012 11:10:24 -0800
- To: Kevin Smith <kevsmith@adobe.com>
- Cc: Shane Wiley <wileys@yahoo-inc.com>, Jonathan Mayer <jmayer@stanford.edu>, Justin Brookman <jbrookman@cdt.org>, "public-tracking@w3.org" <public-tracking@w3.org>
- Message-Id: <91ECCD65-00BA-486D-AB1A-7EDBADB31254@blurryedge.com>
Without taking a position on how sweeping the changes are, there are tons of examples where companies implement measures to create audit trails in their operations to increase transparency. They do it because it is imposed by regulators, legislation, court orders, etc. They do it because it is considered a good business practice for the industry, raising the overall quality of, and trust in, a business method. In this field, many, many companies pay TRUSTe to tell them to implement changes that increase transparency. Complying with technical standards at least doesn't come with a bill.

On Feb 15, 2012, at 8:53 AM, Kevin Smith wrote:

> I actually believe Shane is significantly understating the expense. Any type of sweeping change requires vast amounts of development, testing, beta testing, and maintenance because you lose years of stability.
>
> Furthermore, it is completely unrealistic to try to justify requiring sweeping, expensive changes because you don’t trust a large company not to lie to you and secretly compile profiles after it has stated that it will not. This is a technical standard, which means it is the common denominator of what companies are willing to implement. Why would a company implement expensive changes so that it cannot do what it is not doing anyway? Even SHOULD is far too strong a term here. I would not be comfortable with anything stronger than MAY.
>
>
> From: Shane Wiley [mailto:wileys@yahoo-inc.com]
> Sent: Friday, February 10, 2012 11:27 AM
> To: Jonathan Mayer
> Cc: Justin Brookman; public-tracking@w3.org
> Subject: RE: Deciding Exceptions (ISSUE-23, ISSUE-24, ISSUE-25, ISSUE-31, ISSUE-34, ISSUE-49)
>
> Jonathan,
>
> Moving an entire architecture that is cookie-based to one that is IP + User-Agent based is not trivial and would require changes at all tiers (hosting servers, operational servers, data warehousing systems, reporting, security, all scripts and coding logic for system interoperability, etc.).
> When I quoted the timelines I was being serious. It’s a significant and fundamental change across the board. And while some ad networks may use protocol information for “operational uses,” they probably also use cookies, so removing cookies from the equation would have significant issues for them as well – again, across the board.
>
> I don’t believe I’m “overestimating” the effort for effect.
>
> Side Note 1: I believe there is another Working Group focused on Online Identity (perhaps not W3C, though – I’ll try to track this down). I mention this as it goes back to my earlier comments on not attempting to solve all online privacy issues in a single working group. It’s unfortunate that the charter of this working group has been so broadly interpreted by some, as that appears to be where much of the churn in our efforts is. If our focus were constrained to “profiling” and uses of “profiling,” I believe we’d be MUCH further along.
>
> Side Note 2: I believe the truth of our current situation is somewhere between Mike’s email and your view that our disagreements are localized to just a few issues. The operational purpose exceptions and implementation cost are so core to the discussion (and to the ongoing ability of many web-based companies to monetize their efforts), AND appear to be so incredibly divisive, as to render our progress halted at this time (akin to “going in circles” versus making incremental steps forward). Purely my opinion…
>
> - Shane
>
> From: Jonathan Mayer [mailto:jmayer@stanford.edu]
> Sent: Friday, February 10, 2012 10:46 AM
> To: Shane Wiley
> Cc: Justin Brookman; public-tracking@w3.org
> Subject: Re: Deciding Exceptions (ISSUE-23, ISSUE-24, ISSUE-25, ISSUE-31, ISSUE-34, ISSUE-49)
>
> Shane,
>
> Could you give a bit more explanation of how this would "require massive re-architecture of most internal systems"? As I understand it, some advertising networks already use protocol information for "operational uses."
> For those companies that don't, a quick implementation would be to just hash IP address + User-Agent string and treat that as an identifier. I don't mean to excessively trivialize the implementation burden, but it seems to me much smaller than the other alternatives on the table (save, of course, business as usual).
>
> As for objections to fingerprinting, I want to be clear that the idea I'm floating is passive fingerprinting, not active fingerprinting. Passive fingerprinting leverages information that we would already allow companies to collect—no more.
>
> Jonathan
>
> On Feb 10, 2012, at 9:34 AM, Shane Wiley wrote:
>
> Jonathan,
>
> I believe this could be a “SHOULD” goal because of two core factors:
>
> 1. This approach will require massive re-architecture of most internal systems (a several-year effort for a large company, months to years for mid-size companies, and perhaps too complex for small companies until native platforms come built with this and they can upgrade), and
> 2. There are perhaps larger privacy issues here with the use of Digital Fingerprints. Some advocates (you don’t appear to be with them) believe that a cookie is a better tool than a Digital Fingerprint, as consumers have control of cookies, whereas with a Digital Fingerprint they do not (at least not from a simple, native-tool perspective). I’m personally on the side of cookies, as I believe the control factor and the wealth of automated tools for blocking and purging them is a better outcome for consumers than are Digital Fingerprints.
>
> Side Note: Digital Fingerprints are argued by some vendors to be far more effective for tracking due to the lack of consumer control and the realities of cookie churn.
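The "quick implementation" Jonathan describes, hashing IP address + User-Agent string and treating the result as an identifier, could look like the following minimal sketch. This is purely illustrative: the choice of SHA-256, the separator, and all names are assumptions, not anything proposed in the thread.

```python
import hashlib

def passive_fingerprint(ip: str, user_agent: str) -> str:
    # Derive an opaque identifier from protocol data the server already
    # receives with every request; no new data collection is involved.
    # SHA-256 is an assumed (illustrative) hash choice.
    raw = f"{ip}|{user_agent}".encode("utf-8")
    return hashlib.sha256(raw).hexdigest()

# The identifier is only as stable as the IP/User-Agent pair itself,
# which is the time- and geography-stability limitation raised later
# in the thread.
fp = passive_fingerprint("203.0.113.7", "Mozilla/5.0 (X11; Linux x86_64)")
```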
> - Shane
>
> From: Jonathan Mayer [mailto:jmayer@stanford.edu]
> Sent: Friday, February 10, 2012 10:16 AM
> To: Justin Brookman
> Cc: public-tracking@w3.org
> Subject: Re: Deciding Exceptions (ISSUE-23, ISSUE-24, ISSUE-25, ISSUE-31, ISSUE-34, ISSUE-49)
>
> Thinking more about tracking through IP address + User-Agent string, it occurs to me that the greatest challenges are stability over time and across locations. For some of the "operational uses" we have discussed, time- and geography-limited tracking may be adequate. Scoping the "operational use" exceptions to protocol data would somewhat accommodate those uses without allowing for new data collection, and it would be easier to implement than a client-side privacy-preserving technology. Thoughts on whether this is a possible new direction for compromise?
>
> Jonathan
>
> On Feb 10, 2012, at 8:30 AM, Jonathan Mayer wrote:
>
> Justin,
>
> I think you may be misreading the state of research on tracking through IP address + User-Agent string. There is substantial evidence that some browsers can be tracked in that way some of the time. I am not aware of any study that compares the global effectiveness of tracking through IP address + User-Agent string vs. an ID cookie; intuitively, the ID cookie should be far more effective. The news story you cite glosses over important caveats in that paper's methodology; it is certainly not the case that "62% of the time, HTTP user-agent information alone can accurately tag a host."
>
> Jonathan
>
> On Feb 9, 2012, at 6:48 PM, Justin Brookman wrote:
>
> Sure. As the spec currently reads, third-party ad networks are allowed to serve contextual ads on sites even when DNT:1 is on, yes? In order to do this, they're going to get log data, user-agent string, device info, IP address, referrer URL, etc.
> There is growing recognition that that information, in and of itself, can be used to uniquely identify devices over time (http://www.networkworld.com/news/2012/020212-microsoft-anonymous-255667.html) for profiling purposes. It was my understanding that one of the primary arguments against allowing third parties to place unique identifiers on the client was the concern that they were going to be secretly tracking and building profiles using those cookies. My point is that they will be able to do that regardless, with little external ability to audit. This system is going to rely to some extent on trust unless we are proposing to fundamentally re-architect the web.
>
> The other argument that I've heard against using unique cookies for this purpose is valid, though to me less compelling: that even if they are just used for frequency capping, third parties are going to be able to amass data about the types of ads a device sees, from which you could surmise general information about the sites visited on that device (e.g., you are frequency-capping a bunch of sports ads; ergo, the operator of that device is probably visiting sports pages). Everyone seems to agree that it would be improper for a company to use this information to profile (meta-profile?), but there are still concerns about data breach, illegitimate access, and government access to this potentially revealing information. This concerns me too, but the shadow of my URL stream is to me considerably less privacy-sensitive than my actual URL stream. I could be willing to compromise on a solution that allowed for using cookies for frequency capping, if there were agreement on limiting to a reasonable campaign length, rules against repurposing, and a requirement to make an accountable statement of adherence to the standard.
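For concreteness, the frequency-capping use Justin describes (count impressions per identifier and campaign, stop serving at a cap) amounts to a small counter table. The sketch below is a hypothetical illustration; the cap value, the in-memory storage, and all names are assumptions rather than anything specified in the thread.

```python
from collections import defaultdict

# Hypothetical in-memory cap table: impressions served per
# (identifier, campaign) pair. A real system would persist this and
# expire entries when the campaign ends.
impressions = defaultdict(int)

def should_serve(identifier: str, campaign: str, cap: int = 3) -> bool:
    # Serve the ad only while this device has seen the campaign fewer
    # than `cap` times. Note the counter table itself is the "shadow"
    # of browsing activity that the thread worries could be amassed.
    key = (identifier, campaign)
    if impressions[key] >= cap:
        return False
    impressions[key] += 1
    return True
```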
> I would be interested to hear whether it would be feasible not to register frequency caps for ads in sensitive categories of information (or, if at all, to cap client-side), though again, it's important to keep in mind that that data may well be collected and retained for other excepted purposes under the standard (e.g., fraud prevention), cookie or not.
>
> From: Jonathan Mayer [mailto:jmayer@stanford.edu]
> To: Justin Brookman [mailto:justin@cdt.org]
> Cc: public-tracking@w3.org
> Sent: Thu, 09 Feb 2012 18:32:19 -0500
> Subject: Re: Deciding Exceptions (ISSUE-23, ISSUE-24, ISSUE-25, ISSUE-31, ISSUE-34, ISSUE-49)
>
> Justin, could you explain what you mean here?
>
> Thanks,
> Jonathan
>
> On Feb 9, 2012, at 3:17 PM, Justin Brookman wrote:
>
> the standard currently recognizes that third parties are frequently going to be allowed to obtain uniquely-identifying user agent strings despite the presence of a DNT:1 header

Lauren Gelman
BlurryEdge Strategies
415-627-8512
gelman@blurryedge.com
http://blurryedge.com
Received on Wednesday, 15 February 2012 19:10:58 UTC