- From: Jonathan Mayer <jmayer@stanford.edu>
- Date: Wed, 14 Mar 2012 14:13:15 -0700
- To: Shane Wiley <wileys@yahoo-inc.com>
- Cc: "Roy T. Fielding" <fielding@gbiv.com>, Tracking Protection Working Group WG <public-tracking@w3.org>
- Message-Id: <A31FC9C4-6A58-4A2B-84D1-6318F702EDCC@stanford.edu>
We're in complete agreement on aim: Do Not Track should not make attempts to defraud and hack third parties more likely to succeed. If Do Not Track does help, it will be abused.

We part ways on the practical impact of Do Not Track. The relevant frame of thinking is on the margin:

1) Are attackers better off relative to the status quo?
2) Are defenders worse off relative to the status quo?

I do not see how Do Not Track would facilitate attacks. Malicious users trivially can (and do) block, clear, scramble, and swap cookies. They can use the current opt-out cookies, which prevent an ID from being set with about half of self-regulatory participants. They can also change user-agent, redirect traffic, and herd bots. In short: there are myriad ways for malicious users to evade detection and appear as ordinary users. Do Not Track is a negligible threat vector given what's already possible.

I also do not see how a proportionate response would hinder defenders. Non-cookie tracking technologies, including supercookies and fingerprinting, are undoubtedly effective tools for identifying and stopping malicious users. But just about no third-party service (save fingerprinters) deploys non-cookie tracking technologies for *every* browser. The NAI, in fact, specifically rejects the practice:

> 19. What is the NAI's policy on "Flash cookies" and similar technologies?
>
> NAI members have confirmed that they are not using Flash cookies for online behavioral advertising (OBA). The NAI in 2010 took the position that its members should not use locally-shared objects (LSOs)* like Flash cookies for OBA, Ad Delivery & Reporting, and/or Multi-Site Advertising, until such time as web browser tools allow for the same level of transparency and control as is available today for standard HTTP cookies.
>
> In addition to LSOs, there are other alternatives to standard HTTP cookies that may enable data collection and use for OBA, Multi-site Advertising, and Ad Delivery & Reporting purposes.
> The NAI generally believes that any technology used for such purposes should afford users an appropriate degree of transparency and control. This policy is consistent with the NAI's goal of providing users with insight into the specific technologies used to collect information for the purposes covered by the NAI code. This approach also affords flexibility in the future to evaluate innovative technologies not yet generally in use in the online advertising marketplace.
>
> The web cache is one example of a browser-based technology that can be used to store persistent information. As with LSOs, the NAI takes the position that the web browser cache does not currently afford users an appropriate degree of transparency and control, and that such browser-based storage technologies should not be used by NAI members for OBA, Multi-site Advertising, or Ad Delivery & Reporting purposes until such time as these technologies allow for the same level of transparency and control as is available today for standard HTTP cookies.
>
> *LSOs are technologies that allow for the persistent storage and retrieval of information in relationship to a user's web browsing experience, but that are typically not exposed via native browser user controls (such as those presently available for HTTP cookies). Examples include, but are not limited to, IE Browser Helper Objects (BHOs), Adobe Flash objects, and Microsoft Silverlight objects. Under the NAI's policy, LSOs may continue to be used for settings management purposes (such as user preferences and age verification).

http://networkadvertising.org/managing/faqs.asp#question_19

As for information sharing about known attacks and threat signatures, I don't believe this proposal would impose any limit.

On Mar 14, 2012, at 11:39 AM, Shane Wiley wrote:

> After spending more time with our fraud detection and defense teams, I agree with Roy's stance here.
>
> Some shared observations:
>
> - Bad Actors don't always block cookies, as their goal is to "blend in" and look like a normal user. Often, though, they scramble their cookie data or shift identifiers so cookies cannot be used against them.
> - If Bad Actors feel that DNT:1 will give additional protection from being discovered, they will turn it on en masse.
> - Discovering Bad Actors is difficult, and is getting more difficult as the arms race continues to evolve.
> - Process, process, process - while I'm not at liberty to go into too much detail, Roy's perspective that data collection must occur prior to discovery is absolutely correct. There are 3 types of filtering activities that occur with Bad Actors:
> -- Immediate Filtering: based on previous discovery of direct IPs or other digital fingerprints (yes, digital fingerprints are used to stop bad actors - bad actors are too sophisticated to be stopped by cookie approaches)
> -- Pattern Matching: based on the particular attack vector, data is collected to identify if a user agent is performing activities aligned with a suspicious pattern - patterns of malicious activity vary across time bounds (this is what I believe you're referring to, Jonathan)
> -- Discovery: it takes significant time to discover new attack vectors - sometimes weeks and months before a detectable pattern will emerge as the sophistication of attacks grows
>
> - There are also movements in industry to begin sharing some of this information within industry coalitions and with law enforcement (although our legal system isn't well prepared to take this on at this time - anywhere on the globe) for the greater good - and to do this in a manner that isn't easily discovered by Bad Actors.
>
> Please understand these activities are to PROTECT users and businesses alike (depends on the attack). I'm hopeful we don't purposely create real risk of harm to users in our attempts to "lock down" the DNT standard.
>
> - Shane
>
> -----Original Message-----
> From: Jonathan Mayer [mailto:jmayer@stanford.edu]
> Sent: Wednesday, March 14, 2012 9:19 AM
> To: Roy T. Fielding
> Cc: Tracking Protection Working Group WG
> Subject: Re: Proportionate Response for Fraud Prevention and Security (ISSUE-24)
>
>
> On Mar 14, 2012, at 3:22 AM, Roy T. Fielding wrote:
>
>> On Mar 14, 2012, at 1:09 AM, Jonathan Mayer wrote:
>>
>>> I think there are two lines of thinking to unpack here.
>>>
>>> 1) What does Do Not Track do to third-party services that provide fraud prevention and security functionality for first-party websites (e.g. 41st Parameter, BlueCava)?
>>>
>>> My proposal isn't about those services. At present they get the same treatment as any other third party; in many cases they'll qualify for the outsourcing exception. If a stakeholder wants to propose a special exception for those services, we could discuss it.
>>
>> They would qualify for the exemption for fraud prevention, assuming the data use and retention is so limited.
>
> That's a question of how we scope the exception. It's easy to add clarifying language that that's not what this text is about.
>
>> I did not consider them under the outsourcing exemption because siloing their data per first-party would be unusual.
>
> My understanding is practices substantially vary. It would be helpful to hear from companies in the space.
>
>>> 2) How can Do Not Track accommodate fraud prevention and security for pure third-party services without being overly prescriptive?
>>>
>>> My proposed text gives quite broad latitude to third party websites. What is "reasonable" will vary by industry and company. I agree that guiding examples would be valuable.
>>
>> No, my point was that DNT cannot have an impact on fraud control, period. If it did, then the presence of DNT:1 would be indicative of "reasonable grounds to believe the user or user agent is presently attempting to commit fraud".
>
> The chance that a randomly chosen browser with DNT: 1 is committing fraud, independent of other evidence, will be very low. That's not "reasonable grounds."
>
>> In any case, the nature of data collection for fraud prevention is to collect a lot of data on non-fraudulent behavior, so that when an anomalous set of behavior is encountered the "alarm" is triggered.
>
> Many websites (not just third parties) already use a proportionate response approach to fraud prevention and security. I'm not aware of any third-party service (save fingerprinters) that presently uses a non-cookie active tracking technology for every user that has disabled cookies.
>
>> The premise on which your two operative texts are based is that the fraud control engine can determine "reasonable grounds" without first collecting or using data about the user agent. It does not work that way (and I am not just talking about advertising fraud).
>
> The text is premised on the notion that protocol logs and interaction information are sufficient to establish "reasonable grounds."
>
>> I am not saying this isn't a privacy concern. I am saying it won't be addressed by DNT because lessening fraud control on the basis of a signal received from the client is simply not a viable option. I would encourage the regulators to find a solution to this concern outside of DNT, since it has nothing to do with preferences/consent.
>
> If there's a blanket security or fraud exception, then the privacy properties of DNT are mooted. I don't believe the privacy advocates in the group would ever accept such an exception. Let's talk with potential implementers about their needs before concluding we're between a rock and a hard place.
>
>> ....Roy
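[Editor's note: Jonathan's base-rate point - that a DNT:1 header alone cannot establish "reasonable grounds" - can be sketched with Bayes' rule. Every number below is a hypothetical illustration, not a figure from this thread.]

```python
# Hypothetical illustration: even if EVERY fraudster sends DNT:1, seeing
# DNT:1 on a random request is weak evidence of fraud, because honest
# DNT adopters vastly outnumber fraudsters.
p_fraud = 0.005            # assumed: 0.5% of browsers are fraudulent
p_dnt_given_fraud = 1.0    # worst case: all fraudsters turn on DNT:1
p_dnt_given_honest = 0.20  # assumed: 20% of honest users send DNT:1

# Total probability of observing DNT:1 on a random request.
p_dnt = p_dnt_given_fraud * p_fraud + p_dnt_given_honest * (1 - p_fraud)

# Bayes' rule: probability of fraud given only the DNT:1 signal.
p_fraud_given_dnt = p_dnt_given_fraud * p_fraud / p_dnt

print(f"P(fraud | DNT:1) = {p_fraud_given_dnt:.3f}")  # about 2.5% under these assumptions
```

Under these (invented) numbers, DNT:1 raises the fraud probability from 0.5% to only about 2.5% - far short of grounds to treat a user as a presently suspected attacker.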
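[Editor's note: Shane's "immediate filtering" and "pattern matching" stages can be sketched as follows. This is a minimal illustration under invented thresholds and identifiers, not any vendor's actual system; note that neither path consults the DNT header, consistent with Jonathan's premise that protocol logs and interaction data suffice.]

```python
# Hypothetical blocklist from earlier "discovery" work, and an invented
# rate threshold; real systems use far richer signals.
KNOWN_BAD_IPS = {"203.0.113.7"}
SUSPICIOUS_RATE = 50  # assumed: >50 requests/minute per IP is anomalous

def immediate_filter(ip: str) -> bool:
    """Immediate Filtering: drop traffic matching previously discovered bad identifiers."""
    return ip in KNOWN_BAD_IPS

def pattern_match(requests_per_minute: int) -> bool:
    """Pattern Matching: flag behavior outside the range of ordinary browsing."""
    return requests_per_minute > SUSPICIOUS_RATE

def classify(ip: str, requests_per_minute: int) -> str:
    """Apply the two fast paths in order; everything else passes through."""
    if immediate_filter(ip):
        return "blocked"
    if pattern_match(requests_per_minute):
        return "flagged for review"
    return "allowed"

print(classify("203.0.113.7", 3))     # blocked
print(classify("198.51.100.2", 120))  # flagged for review
print(classify("198.51.100.2", 4))    # allowed
```

Both checks run on fields (client IP, request timing) that appear in ordinary protocol logs, which is the crux of the "proportionate response" proposal: escalate to heavier tracking only after such a check fires.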
Received on Wednesday, 14 March 2012 21:13:47 UTC