Re: Input on threat model from browser privacy summit slides

Thanks for sharing your thoughts; some replies inline.

> On Feb 18, 2020, at 6:14 AM, Kris Chapman <kristen.chapman@salesforce.com> wrote:
> 
> Hi -
> 
> I think people shouldn't have their data exposed (or used) against their will - but I also think they shouldn't expect privacy when they take actions to expose the information.  For instance, in a job interview, I don't think the employer should be able to ask what political party the candidate supports.  If the candidate shows up to the interview wearing a "Candidate X for re-election!" t-shirt, though, then I don't think it's unreasonable for the employer to make some assumptions about the candidate's political leanings, either.

There’s likely no reasonable expectation of privacy for information you have told a party directly. But let’s think about some variations of this scenario.

(1) You go to an interview with Employer A wearing your “Candidate X for re-election!” t-shirt. Then you go to an interview with Employer B wearing a plain t-shirt. Employer B says, “so how about that Candidate X?” Feels kinda weird. Did the employers gossip about you with each other?

(2) You support Candidate X, but don’t want to wear a blatant t-shirt to your interview with Employer A. Instead, you wear a “Legalize spice melange!” t-shirt. But Employer A knows that people who are pro-spice are likely to support Candidate X, especially people of your demographic. Employer A asks, “so how about that Candidate X?” Again, feels kinda weird.

(3) A combo of the above two; you wear your pro-spice T-shirt to an interview with Employer A, but it’s Employer B who asks about Candidate X. Feels extra weird.

A lot of targeted advertising scenarios resemble (1), (2), and (3), not just your original scenario. I don’t think those are OK, either in the real world or in their online analogs.
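
To make the online analog concrete, here’s a minimal sketch of scenario (1)’s web equivalent: a third-party tracker embedded on both employers’ sites, linking your visits with a single cookie. All hostnames and the endpoint shape are hypothetical, purely for illustration.

// Hypothetical tracker at tracker.example, loaded as a subresource on
// both employer-a.example and employer-b.example. One cookie, set in
// the tracker's third-party context, links your visits to both sites.
import * as http from "http";
import { randomUUID } from "crypto";

// visitor ID -> first-party sites where this visitor has been seen
const profiles = new Map<string, Set<string>>();

http.createServer((req, res) => {
  // The embedding page reports its own origin,
  // e.g. /track?site=employer-a.example
  const site = new URL(req.url ?? "/", "https://tracker.example")
      .searchParams.get("site") ?? "unknown";

  // Reuse the visitor's existing tracker cookie, or mint a fresh ID.
  const visitor = /visitor=([0-9a-f-]+)/.exec(req.headers.cookie ?? "")?.[1]
      ?? randomUUID();

  // The same ID shows up on every embedding site: cross-site linkage.
  const seen = profiles.get(visitor) ?? new Set<string>();
  seen.add(site);
  profiles.set(visitor, seen);

  res.setHeader("Set-Cookie", `visitor=${visitor}; SameSite=None; Secure`);
  res.end();
}).listen(8080);

Scenario (2) is worth noting because it needs no shared identifier at all: inferring “supports Candidate X” from “pro-spice” plus demographics is ordinary statistical inference, which is part of why a threat model scoped only to linking user IDs misses real harms.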

> I think one of the things people love about the web is that it lets them be whomever they want to be without it having to be tied to who they are in real life.  There's a lot of value in that, but it's also led to a lot of abusive behavior online.  Personally, I think that should be adjusted, so people do feel responsible for their own actions online.

It would be great if we had a solution to Responsibility online. But this is PING, not RING, so it’s not the right place to try to solve that problem. In any case, I’m not aware of tracking technologies being used to curtail abusive behavior, except in the most trivial sense of detecting bots or mass fraud.

> In terms of data privacy and advertising, I think there are both positive and negative use cases from a consumer point of view.  Bottom line, none of this is simple - or technically easy to pull off.  I am concerned about the threat model representing the idea that a company tracking users is always a bad thing, though - or that data privacy is always good. 

A threat model can have exceptions for access considered legitimate, but they need to be specific and principled.

> For example, "safety from manipulation": what exactly does that mean?  Simply providing more information can be seen as manipulative. 

The specific example I pointed to was Cambridge Analytica, which is known to have used political micro-targeting to influence elections, without the messages being subject to general public scrutiny. I think many people agree this is messed up.

> Or what about the differences between browsing at work vs browsing during your personal time?  I think those are nearly two different threat models, because I don't think employees should expect that they have the same level of privacy at work that they do on their own time.

I don’t personally support employers monitoring their employees’ web browsing. But it’s legal in many jurisdictions, and many consider it legitimate. However, employers generally do not monitor employee web browsing using cross-site tracking technologies. Rather, they install filtering/monitoring firewalls, perhaps even TLS middleboxes; or they install local spyware. Technologies like that are probably outside the scope of the privacy threat model.

> 
> Anyway, I don't have great answers here myself - but just wanted to advocate for a somewhat cautious approach.
> 
> Thanks!
> - Kris
> 
> On Mon, Feb 17, 2020 at 2:02 PM Maciej Stachowiak <mjs@apple.com> wrote:
> 
> 
>> On Feb 17, 2020, at 8:00 AM, Jeffrey Yasskin <jyasskin@google.com> wrote:
>> 
>> Thank you!
>> 
>> The draft at https://w3cping.github.io/privacy-threat-model/#model is very focused on linking user IDs, but https://github.com/w3cping/privacy-threat-model/pull/12 adds a section on "sensitive information" which covers some of your comments here.
> 
> Thanks, I’ll review this and file issues on anything not covered by that PR.
> 
>> 
>> Your interpretation of intrusion (https://tools.ietf.org/html/rfc6973#section-5.1.3) is interesting. https://github.com/w3cping/privacy-threat-model/pull/6 uses the RFC's suggestion of unsolicited messages as inspiration, so I'm curious if other folks think that's where remarketing belongs, or whether there's another good place to categorize it. I suspect we'll have to write down that there isn't consensus that simply seeing a remarketed ad is a privacy harm, but this document *is* a good place to call out disagreement like that.
> 
> The most canonical case of what I mean by “intrusion” is seeing ads obviously highly targeted to a personal characteristic. For example, if I very frequently saw ads referencing my ethnicity, sexual orientation or political views, I would feel very uncomfortable, even if I was assured that the ad selection was done in a theoretically privacy-preserving way.
> 
> I think retargeted ads may also fall into this bucket, but mainly if excessive.
> 
> The common thread is that these kinds of experiences make the user feel like someone is intruding on their sense of privacy, whether or not that is true in some technical sense. I think that is actually pretty similar to the IETF definition of “intrusion” that you linked, even though that definition is framed around unsolicited messages rather than ads.
> 
> I think it’s ok to start with an indication that there’s no consensus on whether certain notions of privacy are part of the Privacy Threat Model of the W3C. However, I think this group will ultimately have to make a call. It doesn’t seem right to automatically exclude any protections that don’t have 100% agreement. That’s not how W3C consensus is supposed to work. So any time the document indicates lack of consensus, there should be an issue filed (probably with an ISSUE marker in the text) to ultimately be resolved by the group.
> 
> Regards,
> Maciej
> 
>> 
>> Jeffrey
>> 
>> On Thu, Feb 13, 2020 at 6:36 PM Maciej Stachowiak <mjs@apple.com> wrote:
>> Hello all,
>> 
>> A while back at a summit on browser privacy, I presented slides that, among other things, explained how the WebKit and Safari teams at Apple think about tracking threats on the web. In many ways, this is the threat model implicit in WebKit’s Tracking Prevention Policy <https://webkit.org/tracking-prevention-policy/>.
>> 
>> This is very brief, because it’s converted from a slide in a presentation, and I have not had much time to expand it.
>> 
>> I’d like this to be considered as possible input for the Privacy Threat Model that PING is working on <https://w3cping.github.io/privacy-threat-model/>.
>> 
>> Though these notes are very brief, they point to a more expansive way of thinking about tracking threats. The current Privacy Threat Model draft seems focused primarily on linking of user IDs between different websites. That’s also the viewpoint expressed in Chrome’s Privacy Sandbox effort, which is likewise primarily focused on linking identity.
>> 
>> Users may consider certain information to be private, even if it does not constitute full linkage of identity. For example, if a site can learn about personal characteristics, such as ethnicity, sexual orientation, or political views, and the user did not choose to give that information to that website, then that’s a privacy violation even if no linkage of identity between two websites occurs. 
>> 
>> I’d be happy to discuss this more in whatever venue is congenial. For now I just wanted to send this out, since I was asked to do so quite some time ago.
>> 
>> 
>> Below is the text of the slide (and its speaker notes), followed by an image of the slide itself.
>> ------------
>> 
>> == Threat Model ==
>> 
>> = Resources to be protected =
>> * Identity
>> * Browsing activity
>> * Personal characteristics
>> * Safety from intrusion
>> * Safety from manipulation
>> 
>> = Potential Attackers =
>> * Who: ad-tech, data sellers, political operatives, browser vendors
>> * Capabilities: client-side state, fingerprinting, collusion, identity
>> * Incentives: $, political influence
>> * Constraints: cost, shaming, regulatory action
>> 
>> 
>> Speaker Notes
>> * Intrusion: highly targeted ad based on personal characteristics, recently viewed product, even if no real tracking
>> * Manipulation: Cambridge Analytica
>> * Who: we include ourselves; browsers shouldn’t track their users either
>> 
>> 
>> 
>> <PastedGraphic-1.png>
> 

Received on Tuesday, 18 February 2020 16:23:02 UTC