Re: Input on threat model from browser privacy summit slides

Hi -

I think people shouldn't have their data exposed (or used) against their
will - but I also think they shouldn't expect privacy when they take
actions to expose the information.  For instance, in a job interview, I
don't think the employer should be able to ask what political party the
candidate supports.  If the candidate shows up to the interview wearing a
"Candidate X for re-election!" t-shirt, though, then I don't think it's
unreasonable for the employer to make some assumptions about the
candidate's political leanings.

I think one of the things people love about the web is that it lets them be
whoever they want to be, without that persona being tied to who they are in
real life.  There's a lot of value in that, but it's also led to a lot of
abusive behavior online.  Personally, I think that balance should shift, so
people do feel responsible for their own actions online.

In terms of data privacy and advertising, I think there are both positive
and negative use cases from a consumer point of view.  Bottom line, none of
this is simple - or technically easy to pull off.  I am concerned, though,
about the threat model representing the idea that a company tracking users
is always a bad thing - or that data privacy is always good.  For example,
"safety from manipulation": what exactly does that mean?  Simply providing
more information can be seen as manipulative.  Or what about the
differences between browsing at work vs browsing during your personal
time?  Those are nearly two different threat models, because I don't
think employees should expect the same level of privacy at work that they
have on their own time.

Anyway, I don't have great answers here myself - but just wanted to
advocate for a somewhat cautious approach.

Thanks!
- Kris

On Mon, Feb 17, 2020 at 2:02 PM Maciej Stachowiak <mjs@apple.com> wrote:

>
>
> On Feb 17, 2020, at 8:00 AM, Jeffrey Yasskin <jyasskin@google.com> wrote:
>
> Thank you!
>
> The draft at https://w3cping.github.io/privacy-threat-model/#model is
> very focused on linking user IDs, but
> https://github.com/w3cping/privacy-threat-model/pull/12 adds a section on
> "sensitive information" which covers some of your comments here.
>
>
> Thanks, I’ll review this and file issues on anything not covered by that
> PR.
>
>
> Your interpretation of intrusion (
> https://tools.ietf.org/html/rfc6973#section-5.1.3) is interesting.
> https://github.com/w3cping/privacy-threat-model/pull/6 uses the RFC's
> suggestion of unsolicited messages as inspiration, so I'm curious if other
> folks think that's where remarketing belongs, or whether there's another
> good place to categorize it. I suspect we'll have to write down that there
> isn't consensus that simply seeing a remarketed ad is a privacy harm, but
> this document *is* a good place to call out disagreement like that.
>
>
> The most canonical case of what I mean by “intrusion” is seeing ads
> obviously highly targeted to a personal characteristic. For example, if I
> very frequently saw ads referencing my ethnicity, sexual orientation or
> political views, I would feel very uncomfortable, even if I was assured
> that the ad selection was done in a theoretically privacy-preserving way.
>
> I think retargeted ads also may fall into this bucket, but mainly if
> excessive.
>
> The common thread is that these kinds of experiences make the user feel
> like someone is intruding on their sense of privacy, whether or not that is
> true in some technical sense. I think that is actually pretty similar to
> the IETF definition of “intrusion” that you linked, even though it
>
> I think it’s ok to start with an indication that there’s no consensus on
> whether certain notions of privacy are part of the Privacy Threat Model of
> the W3C. However, I think this group will ultimately have to make a call.
> It doesn’t seem right to automatically exclude any protections that don’t
> have 100% agreement. That’s not how W3C consensus is supposed to work. So
> any time the document indicates lack of consensus, there should be an issue
> filed (probably with an ISSUE marker in the text) to ultimately be resolved
> by the group.
>
> Regards,
> Maciej
>
>
> Jeffrey
>
> On Thu, Feb 13, 2020 at 6:36 PM Maciej Stachowiak <mjs@apple.com> wrote:
>
>> Hello all,
>>
>> A while back at a summit on browser privacy, I presented slides that,
>> among other things, explained how the WebKit and Safari teams at Apple
>> think about tracking threats on the web. In many ways, this is the threat
>> model implicit in WebKit’s Tracking Prevention Policy <
>> https://webkit.org/tracking-prevention-policy/>.
>>
>> This is very brief, because it’s converted from a slide in a
>> presentation, and I have not had much time to expand it.
>>
>> I’d like this to be considered as possible input for the Privacy Threat
>> Model that PING is working on <
>> https://w3cping.github.io/privacy-threat-model/>.
>>
>> Though these notes are very brief, they point to a more expansive way of
>> thinking about tracking threats. The current Privacy Threat Model draft
>> seems focused primarily on linking of user ID between different websites.
>> That’s the viewpoint also expressed in Chrome’s Privacy Sandbox effort,
>> which is also primarily focused on linking identity.
>>
>> Users may consider certain information to be private, even if it does not
>> constitute full linkage of identity. For example, if a site can learn about
>> personal characteristics, such as ethnicity, sexual orientation, or
>> political views, and the user did not choose to give that information to
>> that website, then that’s a privacy violation even if no linkage of
>> identity between two websites occurs.
>>
>> I’d be happy to discuss this more in whatever venue is congenial. For now
>> I just wanted to send this out, since I was asked to do so quite some time
>> ago.
>>
>>
>> Below is the text of the slide (and its speaker notes), followed by an
>> image of the slide itself.
>> ------------
>>
>> == Threat Model ==
>>
>> = Resources to be protected =
>> * Identity
>> * Browsing activity
>> * Personal characteristics
>> * Safety from intrusion
>> * Safety from manipulation
>>
>> = Potential Attackers =
>> * Who: ad-tech, data sellers, political operatives, browser vendors
>> * Capabilities: client-side state, fingerprinting, collusion, identity
>> * Incentives: $, political influence
>> * Constraints: cost, shaming, regulatory action
>>
>>
>> Speaker Notes
>> * Intrusion: highly targeted ad based on personal characteristics,
>> recently viewed product, even if no real tracking
>> * Manipulation: Cambridge Analytica
>> * Who: we include ourselves; browsers shouldn’t track their users either
>>
>>
>>
>> <PastedGraphic-1.png>
>>
>
>

Received on Tuesday, 18 February 2020 14:17:24 UTC