Re: Input on threat model from browser privacy summit slides

Hi Maciej Stachowiak and all contributors!
Thanks for all the work!
I'd like to share some comments here:

1. "Benign information disclosure..system preferences [like dark mode]"
Do we really care about that someone may know what theme we are using?
Or maybe it's just a common example here? If yes, then maybe another
meaningful example, say, "the preferred search engine"? If no, then I'd
like to ask why leaking theme will be a privacy issue.
The search engine will be an issue in some country like China, since
Google has been banned, so the preferred search engine can be used to
check if a person is doing anti-circurmvention work (cross the great
firewall), if the preferred option is Google.
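
As a side note, reading the theme takes one line of standard
JavaScript and involves no permission prompt. Here is a minimal
TypeScript sketch (the "/collect" endpoint is hypothetical) of how the
preference becomes one more fingerprinting bit:

    // Any page can read the user's theme preference via the standard
    // matchMedia API; no permission prompt is involved.
    const prefersDark: boolean =
      window.matchMedia("(prefers-color-scheme: dark)").matches;
    // One extra fingerprinting bit, reported to a hypothetical endpoint:
    navigator.sendBeacon("/collect", JSON.stringify({ dark: prefersDark }));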

2. "Misattribution"
My understand around "Misattribution" contains 2 parts:
- The intended content attack to an individual in a literal discussion;
- The source hijack technic in a data communication.
Well I don't think they're any related to privacy protection in our
context. Maybe "Misattribution" here means the misguiding activity/hint
to let users leak their privacy?

3. "Transfer user ID from publisher 1 to publisher 2 on navigation."
My understand is that it's the concern about revealing an unique ID to 3rd
party. If so, then we have to consider oauth2 situation.
Let's say P1 is the target server, and P2 is the 3rd party service. In
oauth2 or any similar protocol, P2 has to get an unique ID token from P1
after the user confirmed P2 is trusted by an explicit click
operation. But the ID token can be used reversely confirm the real
userid because they're uniquely 1:1 before it's revoked.
This model is widely used in the industry, and it's not an option to ban
it. However, the model is based on the "trusted 3rd party". So maybe we
should change the statement to "Transfer user ID from publisher 1 to
untrusted publisher 2 on navigation."
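
To make the 1:1 concern concrete, here is a minimal TypeScript sketch
of P1's side; the names (issueToken, introspect, TOKEN_DB) are
illustrative, not any real OAuth2 library API:

    // Sketch of P1's token store. While a token lives, it uniquely
    // names one user, so whoever holds it effectively holds a
    // cross-site identifier.
    interface TokenRecord { userId: string; revoked: boolean }
    const TOKEN_DB = new Map<string, TokenRecord>();

    // P1 issues a token after the user's explicit consent click.
    function issueToken(userId: string): string {
      const token = crypto.randomUUID();
      TOKEN_DB.set(token, { userId, revoked: false });
      return token;
    }

    // Reverse lookup: token -> user ID, 1:1 until revocation.
    function introspect(token: string): string | undefined {
      const rec = TOKEN_DB.get(token);
      return rec && !rec.revoked ? rec.userId : undefined;
    }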

4. I suggest we use "untrusted publisher" in other related statements.
[OT: "what can count as a trusted 3rd party" is out of scope here. But
if anyone is interested in that topic, it may be better to discuss it
together with the security group.]

5. "Javascript can make requests to servers"
My suggestion is "only same-origin is recommended". But it may not
widely be accepted in the industry. To speak from an user perspective, I
would like to see the web site/service work even if I banned certain
origins in my client setting. So it means "only same-origin is required
to work, others origin can be optional". If any possible, I hope this
can be a suggestion to the web developers.
I do think it's a serious privacy issue to accept cross-origin
requests. Because even if we accept the origin site aquire our privacy
information, it doesn't mean this trust can be shared/propagated to
other origins.
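
For what it's worth, a page can already opt into this behaviour per
request today: with mode set to "same-origin", fetch() rejects any
cross-origin request outright. A minimal sketch:

    // With mode: "same-origin", fetch() rejects cross-origin requests
    // before they leave the client.
    async function loadSameOriginOnly(path: string): Promise<unknown> {
      const res = await fetch(path, { mode: "same-origin" });
      if (!res.ok) throw new Error(`request failed: ${res.status}`);
      return res.json();
    }
    // loadSameOriginOnly("/api/profile")                     // works
    // loadSameOriginOnly("https://third-party.example/data") // rejects with a TypeError

Of course this is opt-in per call; my point is about making
"same-origin only" the recommended default rather than something each
developer must remember.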

6. "Servers can define the paths under which they host content"
I think the concern here is that the encoded URL may leak some sensitive
information. For example, if a person is watching a porn movie legally,
the URL + IP will reveal "who is watching what", this would be an
intrusion violation.
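
To illustrate (the URLs and log line below are hypothetical), a
descriptive path exposes the "what" to every party that sees or logs
the URL, and a standard access log already joins it with the "who":

    // Hypothetical example: the path itself carries the sensitive fact.
    const descriptive = "https://video.example/adult/some-title"; // leaks topic
    const opaque      = "https://video.example/v/9f3a1c2e";       // leaks far less
    // A typical server access log pairs IP and path, i.e. "who watched what":
    //   203.0.113.7 - - [20/Feb/2020:13:53:17 +0000] "GET /adult/some-title HTTP/1.1" 200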

So far, that's everything I have in mind.

Best regards.


Maciej Stachowiak writes:

> Hello all,
>
> A while back at a summit on browser privacy, I presented slides that, among other things, explained how the WebKit and Safari teams at Apple think about tracking threats on the web. In many ways, this is the threat model implicit in WebKit’s Tracking Prevention Policy <https://webkit.org/tracking-prevention-policy/>.
>
> This is very brief, because it’s converted from a slide in a presentation, and I have not had much time to expand it.
>
> I’d like this to be considered as possible input for the Privacy Threat Model that PING is working on <https://w3cping.github.io/privacy-threat-model/>.
>
> Though these notes are very brief, they point to a more expansive way of thinking about tracking threats. The current Privacy Threat Model draft seems focused primarily on linking of user ID between different websites. That’s the viewpoint also expressed in Chrome’s Privacy Sandbox effort, which is also primarily focused on linking identity.
>
> Users may consider certain information to be private, even if it does not constitute full linkage of identity. For example, if a site can learn about personal characteristics, such as ethnicity, sexual orientation, or political views, and the user did not choose to give that information to that website, then that’s a privacy violation even if no linkage of identity between two websites occurs.
>
> I’d be happy to discuss this more in whatever venue is congenial. For now I just wanted to send this out, since I was asked to do so quite some time ago.
>
>
> Below is the text of the slide (and its speaker notes), followed by an image of the slide itself.
> ------------
>
> == Threat Model ==
>
> = Resources to be protected =
> * Identity
> * Browsing activity
> * Personal characteristics
> * Safety from intrusion
> * Safety from manipulation
>
> = Potential Attackers =
> * Who: ad-tech, data sellers, political operatives, browser vendors
> * Capabilities: client-side state, fingerprinting, collusion, identity
> * Incentives: $, political influence
> * Constraints: cost, shaming, regulatory action
>
>
> Speaker Notes
> * Intrusion: highly targeted ad based on personal characteristics, recently viewed product, even if no real tracking
> * Manipulation: Cambridge Analytica
> * Who: we include ourselves; browsers shouldn’t track their users either


--
GNU Powered it
GPL Protected it
GOD Blessed it
HFG - NalaGinrut
Fingerprint F53B 4C56 95B5 E4D5 6093 4324 8469 6772 846A 0058
