- From: Jeffrey Yasskin <jyasskin@google.com>
- Date: Mon, 17 Feb 2020 08:00:00 -0800
- To: Maciej Stachowiak <mjs@apple.com>
- Cc: public-privacy <public-privacy@w3.org>
- Message-ID: <CANh-dXkYNwDmND-fEJ+1n7fwJdxtO-o+XMobQM3GzsHs5qFPCQ@mail.gmail.com>
Thank you! The draft at https://w3cping.github.io/privacy-threat-model/#model is very focused on linking user IDs, but https://github.com/w3cping/privacy-threat-model/pull/12 adds a section on "sensitive information" which covers some of your comments here.

Your interpretation of intrusion (https://tools.ietf.org/html/rfc6973#section-5.1.3) is interesting. https://github.com/w3cping/privacy-threat-model/pull/6 uses the RFC's suggestion of unsolicited messages as inspiration, so I'm curious whether other folks think that's where remarketing belongs, or whether there's another good place to categorize it. I suspect we'll have to write down that there isn't consensus that simply seeing a remarketed ad is a privacy harm, but this document *is* a good place to call out disagreement like that.

Jeffrey

On Thu, Feb 13, 2020 at 6:36 PM Maciej Stachowiak <mjs@apple.com> wrote:

> Hello all,
>
> A while back, at a summit on browser privacy, I presented slides that,
> among other things, explained how the WebKit and Safari teams at Apple
> think about tracking threats on the web. In many ways, this is the threat
> model implicit in WebKit’s Tracking Prevention Policy
> <https://webkit.org/tracking-prevention-policy/>.
>
> This is very brief, because it’s converted from a slide in a presentation,
> and I have not had much time to expand it.
>
> I’d like this to be considered as possible input for the Privacy Threat
> Model that PING is working on
> <https://w3cping.github.io/privacy-threat-model/>.
>
> Though these notes are very brief, they point to a more expansive way of
> thinking about tracking threats. The current Privacy Threat Model draft
> seems focused primarily on linking of user IDs between different websites.
> That’s also the viewpoint expressed in Chrome’s Privacy Sandbox effort,
> which is likewise primarily focused on linking identity.
>
> Users may consider certain information to be private even if it does not
> constitute full linkage of identity. For example, if a site can learn
> about personal characteristics, such as ethnicity, sexual orientation, or
> political views, and the user did not choose to give that information to
> that website, then that’s a privacy violation even if no linkage of
> identity between two websites occurs.
>
> I’d be happy to discuss this more in whatever venue is congenial. For now
> I just wanted to send this out, since I was asked to do so quite some time
> ago.
>
> Below is the text of the slide (and its speaker notes), followed by an
> image of the slide itself.
>
> ------------
>
> == Threat Model ==
>
> = Resources to be protected =
> * Identity
> * Browsing activity
> * Personal characteristics
> * Safety from intrusion
> * Safety from manipulation
>
> = Potential Attackers =
> * Who: ad-tech, data sellers, political operatives, browser vendors
> * Capabilities: client-side state, fingerprinting, collusion, identity
> * Incentives: $, political influence
> * Constraints: cost, shaming, regulatory action
>
> Speaker Notes
> * Intrusion: a highly targeted ad based on personal characteristics or a
> recently viewed product, even if no real tracking occurs
> * Manipulation: Cambridge Analytica
> * Who: we include ourselves; browsers shouldn’t track their users either
Attachments
- image/png attachment: PastedGraphic-1.png
Received on Monday, 17 February 2020 16:01:25 UTC