- From: Maciej Stachowiak <mjs@apple.com>
- Date: Tue, 18 Feb 2020 18:47:09 -0800
- To: Jeffrey Yasskin <jyasskin@google.com>
- Cc: public-privacy <public-privacy@w3.org>
- Message-id: <5860FEB9-45D4-41AF-9761-0D2E53B7ECC0@apple.com>
> On Feb 18, 2020, at 2:52 PM, Jeffrey Yasskin <jyasskin@google.com> wrote:
>
> On Mon, Feb 17, 2020 at 10:59 AM Maciej Stachowiak <mjs@apple.com> wrote:
>
>> On Feb 17, 2020, at 8:00 AM, Jeffrey Yasskin <jyasskin@google.com> wrote:
>>
>> Thank you!
>>
>> The draft at https://w3cping.github.io/privacy-threat-model/#model is very focused on linking user IDs, but https://github.com/w3cping/privacy-threat-model/pull/12 adds a section on "sensitive information" which covers some of your comments here.
>
> Thanks, I’ll review this and file issues on anything not covered by that PR.
>
>> Your interpretation of intrusion (https://tools.ietf.org/html/rfc6973#section-5.1.3) is interesting. https://github.com/w3cping/privacy-threat-model/pull/6 uses the RFC's suggestion of unsolicited messages as inspiration, so I'm curious if other folks think that's where remarketing belongs, or whether there's another good place to categorize it. I suspect we'll have to write down that there isn't consensus that simply seeing a remarketed ad is a privacy harm, but this document *is* a good place to call out disagreement like that.
>
> The most canonical case of what I mean by “intrusion” is seeing ads obviously highly targeted to a personal characteristic. For example, if I very frequently saw ads referencing my ethnicity, sexual orientation or political views, I would feel very uncomfortable, even if I was assured that the ad selection was done in a theoretically privacy-preserving way.
>
> I'm very comfortable with some rule excluding those, although my internal logic runs through the likelihood of them being used for redlining (https://en.wikipedia.org/wiki/Redlining).
> That is, they can be bad even if you're confident none of your data left your device. We should (eventually) think through what "highly targeted" means, maybe based on what level of correlation is acceptable between a browser-provided group and a protected category.

Redlining is definitely a relevant harm. In our view, the sense of discomfort such an ad can produce is also a harm in itself.

I’ll draw a very rough analogy. Imagine a city plastered with posters that said “Big Brother is Watching”, alongside highly visible cameras. If anyone inquires, city government officials assure them that the cameras aren’t really hooked up, and even if they were, no one would be watching them. I would see such an environment as harmful even if the officials are completely truthful. (This isn’t exactly on point, because it doesn’t involve the element of protected characteristics, but hopefully it clarifies our thinking.)

> I'm realizing that none of the threats in https://tools.ietf.org/html/rfc6973#section-5 fits the redlining worry very well, or your Cambridge Analytica point. Should we add a "Manipulation" threat, or am I missing an existing one it fits under?

Manipulation was listed as a distinct threat in my original post on this thread.

> I think retargeted ads also may fall into this bucket, but mainly if excessive.

> With some worries about how to let browser APIs act on "excessive", I agree.

> The common thread is that these kinds of experiences make the user feel like someone is intruding on their sense of privacy, whether or not that is true in some technical sense. I think that is actually pretty similar to the IETF definition of “intrusion” that you linked.

> I think it’s ok to start with an indication that there’s no consensus on whether certain notions of privacy are part of the Privacy Threat Model of the W3C. However, I think this group will ultimately have to make a call.
> It doesn’t seem right to automatically exclude any protections that don’t have 100% agreement. That’s not how W3C consensus is supposed to work. So any time the document indicates lack of consensus, there should be an issue filed (probably with an ISSUE marker in the text) to ultimately be resolved by the group.

> Mhmm. I want to be able to use this document to remind my teammates of things we've agreed to (which is, admittedly, not really the purpose of a W3C document), and I don't want to get bogged down discussing controversial things if we can nail down some less controversial points first, but that's all compatible with filing issues and marking them in the text as things to work through later.

It would be convenient for all browser vendors if the W3C’s privacy threat model was identical to their own, or at least something they were fully committed to. But, indeed, this is not the purpose of a W3C document. In any case, it seems like we agree on a suitable short-term approach.

Regards,
Maciej
Received on Wednesday, 19 February 2020 02:47:29 UTC