- From: Robin Berjon via GitHub <sysbot+gh@w3.org>
- Date: Mon, 07 Mar 2022 21:31:10 +0000
- To: public-patcg@w3.org
@lbdvt The thing we need to be very careful about is not to assume that how we want the system to be used is how it actually gets used. If we could ensure that your visit to Nice Furniture™ could _only_ be used to show you a furniture ad for a comparatively short period of time thereafter, we'd be in a very different position compared to the one we're in now. If a study emerges in a few years showing that students whose parents are into nice furniture do less well in college, this data could be used by some universities to turn your kids down in the future. This is, of course, a deliberately contrived example, and it wouldn't be possible in all countries, but the key point is that privacy harms are impossible for people to predict and are time-shifted. This has already had real consequences, for instance with tracking data used to hunt down undocumented people ([WSJ](https://perma.cc/PX7L-ET8C), [Vice](https://perma.cc/72QS-Q5AH)). Lack of purpose limitation is a constant source of problems with data. There's a strong sense in which guaranteeing purpose limitation is a key objective of this group.

@drpaulfarrow I don't have a definitive text on the history, but my understanding from reading around isn't that one person one day decided to apply ideas from HSR to computer systems, but rather that it happened because those were the conceptual tools "lying around" at the time. For instance, _Records, Computers, and the Rights of Citizens_ already mentions data as research and the term "data subject", which became the norm. These assumptions are also in Convention 108, the 1995 Data Directive, and all the EU texts that follow. I've been meaning to look at the Conseil d'État's 1970 report on this to see what's there. If you're interested, you might be able to dig into the Hessischer Landtag's _Vorlage des Datenschutzbeauftragten_, which was influential at the time.

There is a related thread that concerns more general permissions for a website to access more powerful capabilities (including data), and that has been a recurring unsolved issue in the W3C and broader Web community. It reads like a list of failures trying to rely on consent when risk is involved: ActiveX, the Java applet security model, the Device APIs & Policy WG (with similar issues in the WebApps and HTML WGs), the PowerBox proposal from Mozilla & Sony Ericsson, delegated trust… We looked at the problem again [as recently as 2018](https://www.w3.org/Privacy/permissions-ws-2018/report.html) and there wasn't much progress in terms of the state of the art. In terms of what fell apart: the short version is that we are looking at an absence of autonomy in the processing of personal data, along with significant data protection impacts.

I think broad consent is certainly an interesting way to think about options! I suspect you might have more experience with it than I do from your previous work? One key aspect of broad consent is that what is being consented to is generally quite limited in scope (even if not in detail) and still under IRB accountability. I'm not sure that I see it becoming useful in our context because, by the time we've enforced purpose limitations, does consent (of any kind) add anything valuable on top?

--
GitHub Notification of comment by darobin
Please view or discuss this issue at https://github.com/patcg/proposals/issues/5#issuecomment-1061156908 using your GitHub account

--
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config
Received on Monday, 7 March 2022 21:31:12 UTC