Re: DRAFT TAG feedback for fingerprinting

On 05/23/2015 12:45 AM, Mark Nottingham wrote:
>> On 23 May 2015, at 2:16 pm, L. David Baron <> wrote:
>> On Friday 2015-05-22 14:41 +1000, Mark Nottingham wrote:
>>> … based on our discussion this week is here:
>>> Feedback / issues / pulls appreciated. Nick, CC:ing FYI, but realise that this isn't final yet.
>> I'd like to see the opening make a stronger argument than falling
>> back on "reasonably strong consensus in the industry".  Perhaps,
>> though, that's feedback as to what the fingerprinting guidance
>> document could say rather than what the TAG feedback on it could
>> say.
> Yep. I think we actually have a fair amount to work to do there; am going to start writing up a proposal for a Finding.
>> It's a little unclear to me exactly *what* is believed to be a lost
>> cause.  For example, is it:
>> * fingerprinting in today's browsers for a typical user, or
>>   fingerprinting of a browser designed to mitigate fingerprinting
>>   (and, say, over TOR) and attempting to keep up with mitigating
>>   current fingerprinting techniques?  (Or fingerprinting in 2010's
>>   browsers, which is different given that a number of the sources
>>   of entropy have been significantly reduced since then.)
>> * putting users in small-ish buckets (e.g., laptop model + OS
>>   version + browser version) or identifying users down to the
>>   individual?
>> If there are reasonably current data to cite that make the argument
>> that fingerprinting is a lost cause, I think that would be far
>> better than citing consensus.
>> Citing data also allows people who are interested in working on the
>> problem to compare their possible solutions to sources of entropy to
>> the magnitude of the problem.  (Some of the data I've seen seemed
>> somewhat unconvincing because I thought a significant portion of the
>> entropy could be avoided.)
> All very good points. I don't want to rely on consensus in the Finding - just trying to reflect the TAG position for purposes of feedback.
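
(To make the "compare mitigations against the magnitude of the problem"
point concrete: the usual measure is surprisal. A fingerprint shared by
n of N observed users reveals log2(N/n) bits of identifying
information. A minimal sketch, with illustrative numbers rather than
real measurements:

```python
import math

def surprisal_bits(bucket_size, population):
    """Bits of identifying information revealed when a user's
    fingerprint falls in a bucket of `bucket_size` users out of
    `population` observed users: log2(population / bucket_size)."""
    return math.log2(population / bucket_size)

# Illustrative figures only: a laptop-model + OS-version + browser-version
# bucket of 5,000 users, in a population of 1,000,000 observed browsers.
coarse = surprisal_bits(5_000, 1_000_000)   # small-ish bucket
unique = surprisal_bits(1, 1_000_000)       # individually identified
print(f"coarse bucket: {coarse:.1f} bits, unique user: {unique:.1f} bits")
```

On that measure, evaluating a proposed mitigation amounts to asking how
many bits it removes from the attacker's best combined estimate.)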

The document usefully differentiates between active and passive attacks.
Is it helpful to frame out some of the different defense models further?

- Defenses may be global: reducing the fingerprinting surface or
increasing the fingerprinter's cost, to make passive, population-wide
fingerprinting infeasible even if active or targeted fingerprinting is
still possible;
- or specific: a given user may be willing to trade away many
performance and display improvements in order not to be individually
identified.

I wonder if we can offer more help for the specific defense:
A user deliberately seeking out privacy, such as a Tor Browser user, has
already decided to trade performance for anonymity. Even if the
resulting profile stands out as "anonymous," it should at least be part
of an indistinguishable pool of anonymous users, or unlinkable to the
user's other activities. Can we give that user (or that user's browser
authors) the tools to block reporting of client-side features, timing
differentiation, and other entropy sources?
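
One such tool, as a sketch: clamping timestamp resolution so that
high-resolution timers stop differentiating machines (Tor Browser has
coarsened timers along these lines). The snippet below just illustrates
the quantization step, in Python for brevity; the granularity value is
an assumption, not a recommendation:

```python
def clamp_timestamp(t_ms, granularity_ms=100.0):
    """Quantize a high-resolution timestamp (in milliseconds) down to a
    coarse granularity, so timing side channels resolve no finer than
    `granularity_ms`."""
    return (t_ms // granularity_ms) * granularity_ms

# Two timestamps ~65 ms apart become indistinguishable after clamping.
print(clamp_timestamp(1234.5678))  # -> 1200.0
print(clamp_timestamp(1299.9999))  # -> 1200.0
```

The same idea generalizes to other entropy sources: report values in
coarse, widely-shared buckets rather than at full precision.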


Wendy Seltzer -- Policy Counsel and Domain Lead, World Wide Web Consortium (W3C)
+1.617.715.4883 (office) / +1.617.863.0613 (mobile)

Received on Wednesday, 27 May 2015 17:15:33 UTC