- From: Kevin Smith <kevsmith@adobe.com>
- Date: Thu, 9 Feb 2012 13:23:43 -0800
- To: Jonathan Mayer <jmayer@stanford.edu>
- CC: "public-tracking@w3.org (public-tracking@w3.org)" <public-tracking@w3.org>
- Message-ID: <6E120BECD1FFF142BC26B61F4D994CF3064CAB4BF5@nambx07.corp.adobe.com>
You are right. Impasses require compromise. Perhaps retention-based exceptions would be a good starting place for compromise.

From: Jonathan Mayer [mailto:jmayer@stanford.edu]
Sent: Thursday, February 09, 2012 12:51 PM
To: Kevin Smith
Cc: public-tracking@w3.org (public-tracking@w3.org)
Subject: Re: Deciding Exceptions (ISSUE-23, ISSUE-24, ISSUE-25, ISSUE-31, ISSUE-34, ISSUE-49)

Some advertising companies won't budge on "operational uses," and some privacy advocates won't accept use-based exceptions. I don't see another way to resolve this impasse. If you do, I'm listening.

Jonathan

On Feb 9, 2012, at 11:27 AM, Kevin Smith wrote:

I think it's time for us to move on from this topic. I have heard at least 3 of the largest internet advertising companies in the world say that this is not currently a viable option. Whether they are wrong or right, it would seem unwise for the W3C to put in a standard something that the world's leading technology companies claim would not scale, without stronger evidence that the proposed technology is indeed viable and would scale adequately. I think the best we can do is note somewhere that there are current efforts to develop technologies that would move more of the operational functions to the client, and that there is at least one promising solution currently being tested.

From: JC Cannon [mailto:jccannon@microsoft.com]
Sent: Tuesday, February 07, 2012 12:29 PM
To: Mike Zaneis; Shane Wiley; Alan Chapell; Jonathan Mayer; Sean Harvey
Cc: Matthias Schunter; Jeffrey Chester; public-tracking@w3.org (public-tracking@w3.org)
Subject: RE: Deciding Exceptions (ISSUE-23, ISSUE-24, ISSUE-25, ISSUE-31, ISSUE-34, ISSUE-49)

Client-side solutions always look great in a lab or when working with a small number of sites.
However, to be successful they have to be deployed to tens of millions of clients, require changes to millions of sites, and still keep the old systems in place, increasing support costs and decreasing efficiency. Yes, Microsoft Research has investigated several solutions that show promise. But we are still a ways away from a solution that is easy to deploy and use. Let me say I will be the first to jump up and down and push for adoption once we find that solution that clears those two hurdles. We have yearly research summits focusing on these types of technologies and will continue to search for a solution. I don't feel the W3C is the right place to try to define them or place them into the DNT standard.

Respectfully,
JC

From: Mike Zaneis [mailto:mike@iab.net]
Sent: Monday, February 06, 2012 1:58 PM
To: Shane Wiley; Alan Chapell; Jonathan Mayer; Sean Harvey
Cc: Matthias Schunter; Jeffrey Chester; public-tracking@w3.org (public-tracking@w3.org)
Subject: RE: Deciding Exceptions (ISSUE-23, ISSUE-24, ISSUE-25, ISSUE-31, ISSUE-34, ISSUE-49)

I would like to echo Alan's argument about small publisher considerations here. This was my point in Brussels: we cannot ask millions of websites (certainly well over the conservative estimate Shane gave of approximately 1 million) to reconfigure their servers to meet the W3C standard. While this concern was largely brushed off as a mere "educational" task by some in attendance, we cannot fool ourselves into believing it is an achievable goal. The reason the DAA program has been able to achieve well over 90% market participation in the United States is that we leverage the ad networks to deliver increased transparency and choice instead of focusing on publishers. I fear putting the onus on publishers will relegate any standard to the trash heap of P3P and other failed projects.
Mike Zaneis
SVP & General Counsel, Interactive Advertising Bureau
(202) 253-1466
Follow me on Twitter @mikezaneis

From: Shane Wiley [mailto:wileys@yahoo-inc.com]
Sent: Monday, February 06, 2012 10:06 AM
To: Alan Chapell; Jonathan Mayer; Sean Harvey
Cc: Matthias Schunter; Jeffrey Chester; public-tracking@w3.org (public-tracking@w3.org)
Subject: RE: Deciding Exceptions (ISSUE-23, ISSUE-24, ISSUE-25, ISSUE-31, ISSUE-34, ISSUE-49)

I understand the desire for technology-oriented solutions, as those remove much of the "trust" factor necessary for user-based prohibitions or exceptions within the standard. As we have ~1 million websites around the globe that will need to implement DNT, ranging from the very large (a disproportionate share of user engagement) to the very small (small bloggers who rely on ad revenue to pay their mortgage and will feel the impact of revenue loss more acutely), a more balanced approach will be necessary for DNT to be successful. The alternative is to force an extremist, technology-only solution on sites across the globe and quickly repeat the failure of P3P (and risk even large websites not deploying this version of DNT). I personally want the W3C's process to be hugely successful and for DNT to be implemented by every website across the globe. To do this, we'll need to find a balance with operational purpose exceptions that provide a degree of "trust" initially for website operators. This trust can be further tested through self-regulatory efforts to ensure the appropriate protections have been put in place to solidly separate business-as-usual, basic operations from profiling efforts (self-attestations, privacy policy postings, external audits, etc.). I appreciate the stance both sides of the philosophical divide are taking to cement their respective perspectives.
I believe industry participants have done an incredibly good job of seeking an acceptable middle point as a starting place (I'm biased, of course). If Jonathan's approach is the end of the compromise process, I'm afraid W3C's DNT is DOA. Hopefully the conversation will continue from here and we'll figure out a way to start at use-based exceptions and develop a process by which technology solutions become the best practice as soon as they're available at scale (and in a manner all privacy advocates would agree does not harm consumer privacy).

- Shane

From: Alan Chapell [mailto:achapell@chapellassociates.com]
Sent: Monday, February 06, 2012 8:44 AM
To: Jonathan Mayer; Sean Harvey
Cc: Matthias Schunter; Jeffrey Chester; public-tracking@w3.org (public-tracking@w3.org)
Subject: Re: Deciding Exceptions (ISSUE-23, ISSUE-24, ISSUE-25, ISSUE-31, ISSUE-34, ISSUE-49)

Thanks Jonathan. I'll take a look at donottrack.us. I was merely reacting to your assertion that client-side frequency capping would work everywhere because it worked for a small game network. Perhaps I misunderstood your point on this issue when you raised it in Brussels.

Regarding my larger point: I'm concerned that this group may not have adequate representation from the long tail of industry. And while our seven invited experts certainly bring a significant level of talent and experience, it does not appear that their collective expertise extends into the business side of the long tail. Given that one of the chief criticisms of the P3P standard was that its complexity made it difficult to implement correctly, I think it's worth asking whether we are at risk of repeating those mistakes.
I can't speak to whether or not MSFT's or Google's technology teams will be able to implement this standard, but I can say they have significantly more technology resources than a mid-sized publisher or ad network. When Commissioner Neelie Kroes decides whether or not to support this framework, the ability of small to mid-sized digital companies located in the EU to implement it would be a consideration in that decision. It seems like this is worth raising as a formal issue; please let me know if any of you disagree.

Cheers,
Alan Chapell
Chapell & Associates
917 318 8440

From: Jonathan Mayer <jmayer@stanford.edu>
Date: Sun, 5 Feb 2012 14:54:04 -0800
To: Sean Harvey <sharvey@google.com>
Cc: Matthias Schunter <mts@zurich.ibm.com>, Jeffrey Chester <jeff@democraticmedia.org>, "public-tracking@w3.org (public-tracking@w3.org)" <public-tracking@w3.org>
Subject: Re: Deciding Exceptions (ISSUE-23, ISSUE-24, ISSUE-25, ISSUE-31, ISSUE-34, ISSUE-49)
Resent-From: <public-tracking@w3.org>
Resent-Date: Sun, 05 Feb 2012 22:57:20 +0000

My notions of "minimization" and "balancing" encompass consideration of alternatives to a blanket use-based exception. There are infinite possible exceptions for any particular business purpose, with countless permutations of collection and retention limits. Those limits could be as straightforward as a retention period; they could be as complex as a privacy-preserving alternative technology. As for client-side frequency capping and other privacy-preserving web technologies: my lab is far from alone in developing these alternatives. See the annotated bibliography on donottrack.us for some of the other work in the field.
These approaches are not mere lab studies; much of the finest work has been done by Microsoft Research, using data and technology from deployed systems. I can't speak to what DoubleClick was capable of in 2007 and earlier, but I am very skeptical that these technologies are out of reach in 2012.

All of that said, let's take the position you (and others) have articulated at face value: client-side privacy-preserving technologies won't work. Seeing as client-side storage is a fundamental component of just about *any* privacy-preserving system, then all we're left with are unique ID cookies. The balance is, then, between frequency capping (where there is undoubtedly some economic value) and collection of a user's browsing activity across websites (the *central* concern in the Do Not Track debate for me and many others). As you rightly noted in Brussels, taking the balance seriously, that means no frequency capping for DNT users.

And so, to circle back to privacy-preserving technologies: I am trying to extend an olive branch to the advertising industry representatives in the group. I am trying to find ways for you to accomplish your business aims while giving user privacy the deference it deserves. As between no frequency capping and an admittedly more challenging privacy-preserving frequency capping technology, I should imagine the latter is preferable.

Jonathan

On Feb 5, 2012, at 1:57 PM, Sean Harvey wrote:

I want to comment on Jonathan's original email on this chain, in the context of his later response below. Jonathan's thoughts are in general well thought out. To my mind the main stumbling block is his elaboration of #5, which was titled "Minimization" but focused on the use of "privacy enhancing alternatives".
In light of both our meeting in Brussels and Jonathan's later post to this email chain, it's clear that Jonathan is speaking of his own personal version of client-side frequency capping, so I feel forced to address this issue, though it seems tangential to our goals.

To put it simply, client-side frequency capping does not work at scale. There were two separate initiatives at DoubleClick, prior to its acquisition by Google, that attempted to move functionality like frequency capping onto the client side. Both looked nice when you did a little demo of them. But neither worked at scale across a system -- like the ones that will be most directly impacted by these discussions -- that transacts tens of billions of events per day. Discussing this in further detail would be inappropriate in the context of this list because of proprietary technology concerns, but suffice it to say that client-side frequency capping and other such ad serving capabilities crap out at scale. This is not to say that Jonathan is not extremely intelligent or that his idea isn't a good one, but he does not have the hard experience of many years spent building & maintaining massively scalable software systems that must never go down, at risk of the financial viability of tens of thousands of businesses across the web. And we do have many other women & men who are every bit as intelligent running our ad serving & other systems.

I am also unconvinced that retaining such data on the client side is a data privacy & security improvement, for physical security reasons: clients (e.g. browsers on laptops) are far more easily stolen than servers in data centers. While it's true that there would be no human-readable values on the client side that an individual could leverage, the same is true of the frequency cap ticks that are currently stored on the server side.
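[Editor's note: for readers outside ad serving, the mechanism being debated in this thread is easy to sketch. The following is a minimal illustration of client-side frequency capping, with impression counters kept by the browser rather than tied to a server-side unique ID. It is not any participant's actual implementation; the function names, cap values, and the plain object standing in for browser storage (localStorage or a cookie, in practice) are all invented for the example.]

```javascript
// Minimal sketch of client-side frequency capping: the browser, not the
// ad server, remembers how often each campaign has been shown, so the
// server never needs a unique per-user ID. A plain object stands in for
// localStorage here so the sketch is self-contained and runnable.
function makeFrequencyCapper(store) {
  return {
    // Returns true if the campaign may still be shown today; if so, it
    // records the impression locally. Counters reset each UTC day.
    tryServe(campaignId, dailyCap, now = Date.now()) {
      const day = Math.floor(now / 86400000); // 86400000 ms = one day
      const record = store[campaignId];
      if (!record || record.day !== day) {
        store[campaignId] = { day, count: 1 };
        return true;
      }
      if (record.count >= dailyCap) return false; // cap reached: skip ad
      record.count += 1;
      return true;
    },
  };
}
```

The scale objection raised in this thread is not about this per-client logic, which is trivial, but about keeping such counters consulted and consistent across every ad-serving path at tens of billions of events per day; the sketch shows only the former.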
I think it is entirely valid & useful for us to discuss openly the merits of a frequency cap exception, but I do not think it is legitimate for us to make potentially disastrous technical implementation requirements in the context of this W3C compliance process.

sean

On Sat, Feb 4, 2012 at 1:17 AM, Jonathan Mayer <jmayer@stanford.edu> wrote:

Here are a few exceptions that I believe could clear the hurdle.

- Content serving
- Contextual personalization
- Outsourcing
- Protocol logs for debugging
- Unidentifiable data (including aggregated data and client-side frequency capping)
- View fraud prevention through a stepped response

On Feb 2, 2012, at 7:06 AM, Matthias Schunter wrote:

> Hi Jonathan/Jeff,
>
> What exceptions do you see at this point that are likely to satisfy this
> catalogue? What are viable candidates where only more data/input/answers
> are needed?
>
> Regards,
> Matthias
>
> From: Jeffrey Chester <jeff@democraticmedia.org>
> To: Jonathan Mayer <jmayer@stanford.edu>
> Cc: "public-tracking@w3.org (public-tracking@w3.org)" <public-tracking@w3.org>
> Date: 02/02/2012 03:34 PM
> Subject: Re: Deciding Exceptions (ISSUE-23, ISSUE-24, ISSUE-25, ISSUE-31, ISSUE-34, ISSUE-49)
>
> I agree with Jonathan's thoughtful discussion of the exemption issue. I
> recognize this is a delicate matter, and it will require continued dialogue
> to properly balance the goals of DNT with traditional digital marketing
> (and advertising generally) business practices. I believe that if we
> follow Jonathan's outline, we can achieve our collective goals.
>
> Jeff
>
> On Feb 1, 2012, at 9:45 PM, Jonathan Mayer wrote:
>
> The working group has made great progress on the broad contours of
> the definition document, and the conversation is shifting to specific
> exceptions. With that in mind, now seems an appropriate time to
> articulate my views on when and how exceptions should be granted.
>
> At a high level, we all agree that exceptions reflect a delicate
> balance between consumer privacy interests and commercial value.
> There are, no doubt, substantial differences in opinion about where
> that balance should be struck.
> I hope here to clarify my approach and help others understand why I find
> recent proposals for blanket exceptions to be non-starters.
>
> In my view, any exception must satisfy this rigorous six-part test.
>
> 1) Specifically defined. An exception must clearly delineate what
> data may be collected, retained, and used. If a proposed exception
> is purely use-based, that needs to be extraordinarily explicit.
>
> 2) No special treatment. We should grant or deny an exception on the
> merits of how it balances privacy and commerce, not on a specific
> business model.
>
> 3) Compelling business need. A bald assertion that without a
> specific exception Do Not Track will "break the Internet" is not
> nearly enough. I expect industry stakeholders to explain, with
> specificity, what business purposes they need data for and why those
> business purposes are extraordinarily valuable.
>
> 4) Significantly furthers the business need. I expect industry
> participants to explain exactly how and to what extent a proposed
> exception will further the compelling business needs they have
> identified. In some cases, such as security and fraud exceptions,
> this may call for technical briefing.
>
> 5) Strict minimization. If there is a privacy-preserving technology
> that has equivalent or nearly equivalent functionality, it must be
> used, and the exception must be no broader than that technology. The
> burden is on industry to show that a privacy-preserving alternative
> involves tradeoffs that fundamentally undermine its business needs.
> In the context of frequency capping, for example, I need to hear why,
> specifically, client-side storage approaches will not work. In the
> context of market research, to take another example, I would need
> to hear why statistical inference from non-DNT users would be
> insufficient.
>
> 6) Balancing. There is a spectrum of possible exceptions for any
> business need.
> At one end is a pure use-based exception that allows for all collection
> and retention. At the other end is no exception at all. In between there
> are infinite combinations of collection, retention, and use limits,
> including exceptions scoped to privacy-preserving but inferior
> technologies. In choosing among these alternatives, I am guided by the
> magnitude of commercial need and consumer privacy risk. I am only
> willing to accept an exception where the commercial need substantially
> outweighs consumer privacy interests.
>
> I understand example exceptions may be helpful in understanding my
> thinking, so here are a few from the IETF Internet-Draft.
>
> 3. Data that is, with high confidence, not linkable to a specific
>    user or user agent. This exception includes statistical aggregates
>    of protocol logs, such as pageview statistics, so long as the
>    aggregator takes reasonable steps to ensure the data does not
>    reveal information about individual users, user agents, devices,
>    or log records. It also includes highly non-unique data stored in
>    the user agent, such as cookies used for advertising frequency
>    capping or sequencing. This exception does not include anonymized
>    data, which recent work has shown to be often re-identifiable
>    (see [Narayanan09] and [Narayanan08]).
> 4. Protocol logs, not aggregated across first parties, and subject
>    to a two-week retention period.
> 5. Protocol logs used solely for advertising fraud detection, and
>    subject to a one-month retention period.
> 6. Protocol logs used solely for security purposes such as intrusion
>    detection and forensics, and subject to a six-month retention period.
> 7. Protocol logs used solely for financial fraud detection, and
>    subject to a six-month retention period.
>
> I would add, in closing, that in difficult cases I would err on the
> side of not granting an exception.
> The exemption API is a policy safety valve: if we are too stringent, a
> third party can ask for a user's consent. If we are too lax, users are
> left with no recourse.
>
> Best,
> Jonathan

--
Sean Harvey
Business Product Manager
Google, Inc.
212-381-5330
sharvey@google.com
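[Editor's note: the retention-scoped exceptions quoted from the Internet-Draft (items 4 through 7) lend themselves to a mechanical check. The sketch below is purely illustrative: the purpose labels, the purge function, and the idea of a log-cleanup job are assumptions for the example, with only the retention windows taken from the draft text.]

```javascript
// Illustrative sketch: keep only protocol-log records still within the
// retention window for their stated purpose, mirroring draft items 4-7.
// Purpose labels and the purge job itself are hypothetical.
const RETENTION_DAYS = {
  "protocol-log": 14,     // item 4: two-week retention
  "ad-fraud": 30,         // item 5: one-month retention
  "security": 180,        // item 6: six-month retention
  "financial-fraud": 180, // item 7: six-month retention
};

const MS_PER_DAY = 86400000;

// Returns the records still within their retention window; records with
// no recognized exception purpose are dropped outright.
function purgeExpired(records, now = Date.now()) {
  return records.filter((r) => {
    const limit = RETENTION_DAYS[r.purpose];
    if (limit === undefined) return false;
    return now - r.loggedAt <= limit * MS_PER_DAY;
  });
}
```

A policy expressed this way makes the "specifically defined" prong of the six-part test concrete: the permitted purposes and their windows are an enumerable table rather than open-ended prose.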
Received on Thursday, 9 February 2012 21:24:16 UTC