Re: Input on threat model from browser privacy summit slides

On Mon, Feb 17, 2020 at 10:59 AM Maciej Stachowiak <> wrote:

> On Feb 17, 2020, at 8:00 AM, Jeffrey Yasskin <> wrote:
> Thank you!
> The draft at is
> very focused on linking user IDs, but
> adds a section on
> "sensitive information" which covers some of your comments here.
> Thanks, I’ll review this and file issues on anything not covered by that
> PR.
> Your interpretation of intrusion (
> is interesting.
> uses the RFC's
> suggestion of unsolicited messages as inspiration, so I'm curious if other
> folks think that's where remarketing belongs, or whether there's another
> good place to categorize it. I suspect we'll have to write down that there
> isn't consensus that simply seeing a remarketed ad is a privacy harm, but
> this document *is* a good place to call out disagreement like that.
> The most canonical case of what I mean by “intrusion” is seeing ads
> obviously highly targeted to a personal characteristic. For example, if I
> very frequently saw ads referencing my ethnicity, sexual orientation or
> political views, I would feel very uncomfortable, even if I was assured
> that the ad selection was done in a theoretically privacy-preserving way.

I'm very comfortable with some rule excluding those, although my internal
logic runs through the likelihood of them being used for redlining
<>. That is, they can be bad even if
you're confident none of your data left your device. We should (eventually)
think through what "highly targeted" means, maybe based on what level of
correlation is acceptable between a browser-provided group and a protected
characteristic.

I'm realizing that none of the threats in fits the redlining worry very
well, or your Cambridge Analytica point. Should we add a "Manipulation"
threat, or am I missing an existing one it fits under?
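On the "level of correlation" question, one way to make it measurable (purely illustrative; the metric, the simulated data, and any threshold are my assumptions, not anything the group has agreed to) is to compute the mutual information between a browser-assigned cohort and a protected characteristic:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (in bits) between two paired discrete sequences."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        p_indep = (px[x] / n) * (py[y] / n)
        mi += p_joint * math.log2(p_joint / p_indep)
    return mi

# Hypothetical data: 100 users with a binary protected attribute.
attr = ["A", "A", "B", "B"] * 25
# A cohort assignment that fully reveals the attribute vs. one independent of it.
leaky_cohort = ["c1" if a == "A" else "c2" for a in attr]
safe_cohort = ["c1", "c2"] * 50

print(mutual_information(attr, leaky_cohort))  # 1.0 bit: cohort fully reveals the attribute
print(mutual_information(attr, safe_cohort))   # 0.0 bits: cohort independent of the attribute
```

A redlining-resistant API would presumably need to keep this number (or some similar leakage measure) below an agreed bound for every protected characteristic, which is exactly the kind of threshold that would have to be negotiated.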

> I think retargeted ads also may fall into this bucket, but mainly if
> excessive.

With some worries about how to let browser APIs act on "excessive", I agree.

> The common thread is that these kinds of experiences make the user feel
> like someone is intruding on their sense of privacy, whether or not that is
> true in some technical sense. I think that is actually pretty similar to
> the IETF definition of “intrusion” that you linked, even though it
> I think it’s ok to start with an indication that there’s no consensus on
> whether certain notions of privacy are part of the Privacy Threat Model of
> the W3C. However, I think this group will ultimately have to make a call.
> It doesn’t seem right to automatically exclude any protections that don’t
have 100% agreement. That’s not how W3C consensus is supposed to work. So
> any time the document indicates lack of consensus, there should be an issue
> filed (probably with an ISSUE marker in the text) to ultimately be resolved
> by the group.

Mhmm. I want to be able to use this document to remind my teammates of
things we've agreed to (which is, admittedly, not really the purpose of a
W3C document), and I don't want to get bogged down discussing controversial
things if we can nail down some less controversial points first, but that's
all compatible with filing issues and marking them in the text as things to
work through later.


> Maciej
> Jeffrey
> On Thu, Feb 13, 2020 at 6:36 PM Maciej Stachowiak <> wrote:
>> Hello all,
>> A while back at a summit on browser privacy, I presented slides that,
>> among other things, explained how the WebKit and Safari teams at Apple
>> think about tracking threats on the web. In many ways, this is the threat
>> model implicit in WebKit’s Tracking Prevention Policy <>.
>> This is very brief, because it’s converted from a slide in a
>> presentation, and I have not had much time to expand it.
>> I’d like this to be considered as possible input for the Privacy Threat
>> Model that PING is working on <>.
>> Though these notes are very brief, they point to a more expansive way of
>> thinking about tracking threats. The current Privacy Threat Model draft
>> seems focused primarily on linking of user ID between different websites.
>> That’s the viewpoint also expressed in Chrome’s Privacy Sandbox effort,
>> which is also primarily focused on linking identity.
>> Users may consider certain information to be private, even if it does not
>> constitute full linkage of identity. For example, if a site can learn about
>> personal characteristics, such as ethnicity, sexual orientation, or
>> political views, and the user did not choose to give that information to
>> that website, then that’s a privacy violation even if no linkage of
>> identity between two websites occurs.
>> I’d be happy to discuss this more in whatever venue is congenial. For now
>> I just wanted to send this out, since I was asked to do so quite some time
>> ago.
>> Below is the text of the slide (and its speaker notes), followed by an
>> image of the slide itself.
>> ------------
>> == Threat Model ==
>> = Resources to be protected =
>> * Identity
>> * Browsing activity
>> * Personal characteristics
>> * Safety from intrusion
>> * Safety from manipulation
>> = Potential Attackers =
>> * Who: ad-tech, data sellers, political operatives, browser vendors
>> * Capabilities: client-side state, fingerprinting, collusion, identity
>> * Incentives: $, political influence
>> * Constraints: cost, shaming, regulatory action
>> Speaker Notes
>> * Intrusion: highly targeted ad based on personal characteristics,
>> recently viewed product, even if no real tracking
>> * Manipulation: Cambridge Analytica
>> * Who: we include ourselves; browsers shouldn’t track their users either
>> <PastedGraphic-1.png>

Received on Tuesday, 18 February 2020 22:53:18 UTC