- From: Pete Snyder <psnyder@brave.com>
- Date: Fri, 18 Jan 2019 15:13:47 -0800
- To: Christine Runnegar <runnegar@isoc.org>
- Cc: "public-privacy@w3.org" <public-privacy@w3.org>
Hi all,

In the last call, I agreed to do three things: 1) present specific, real-world examples of where I think a "privacy mode" (PM) document could be applied to existing standards, 2) discuss how this "privacy mode" proposal compares to the existing "User Data Controls in Web Browsers" draft, and 3) add my current "privacy mode" draft to the PING github repo. I hope this email accomplishes each of those goals :)

## "Heightened Privacy Mode" and Current Specs

### High Resolution Timers

Several specifications currently define interfaces that return high resolution (sub-millisecond accuracy) time measurements (e.g. Navigation Timing, Performance Timing). Many parties (e.g. PING, academic and community attack papers) have documented ways that these timers can be leveraged to violate privacy guarantees in the browser (e.g. cache attacks, hardware fingerprinting, history leaks, Spectre-style memory leaks).

These specifications currently include "wobble" language, all roughly boiling down to "some browsers may decide to add noise or return less precise measurements". This is suboptimal for several reasons: it reduces the usefulness of the standard by giving privacy-concerned vendors no common alternate behavior to standardize around, and it harms web compatibility by giving web authors no alternate behavior to write around.

A PM section for these specifications might include text along the lines of:

When the user has made a request for heightened privacy by using a privacy mode, or by selecting a privacy-oriented browser, implementors SHOULD reduce the resolution of these timers to microsecond level resolution.

(I'm not suggesting the above as a specific solution for timer-related problems, I'm just offering it as a motivating example.)

### Canvas

Many fingerprinting attacks use subtle implementation and hardware differences in how identical canvas instructions are rendered as a fingerprinting mechanism. A PM section in the canvas section of the HTML spec might then read something like the following:

When the user has made a request for heightened privacy by using a privacy mode, or by selecting a privacy-oriented browser, implementors should not implement the `HTMLCanvasElement.prototype.toDataURL` and `HTMLCanvasElement.prototype.toBlob` methods. When in PM, calling these methods should throw a `PrivacyProtection` exception.

Again, the text above is not meant to suggest a specific solution, only to show how any given specific solution could be integrated into a standard.

### Web Audio

Many endpoints in the Web Audio API reveal details about the underlying hardware, which are also frequently used to fingerprint users. A PM section in this spec might read:

When the user has made a request for heightened privacy by using a privacy mode, or by selecting a privacy-oriented browser, implementors should not return complete information about the device's audio hardware. Instead, the relevant APIs should return one of the following three profiles of audio information, selecting the highest-functionality one that the user's hardware matches. (table below...)
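To make the examples above a bit more concrete, here is a rough, non-normative sketch (in TypeScript; my own illustration, not proposed spec text) of the timer-clamping idea. The `privacyModeRequested` flag stands in for whatever internal signal a UA would consult; no such API exists today.

```typescript
// Illustrative sketch only: clamp a high resolution timestamp when the user
// has requested heightened privacy. "privacyModeRequested" is a hypothetical
// UA-internal signal, not an existing API.
const PM_RESOLUTION_MS = 0.001; // microsecond-level resolution, per the example text above

function clampTimestamp(
  rawMs: DOMHighResTimeStamp,
  privacyModeRequested: boolean
): DOMHighResTimeStamp {
  if (!privacyModeRequested) {
    return rawMs; // normal behavior: full resolution
  }
  // Round down to the nearest multiple of the reduced resolution.
  return Math.floor(rawMs / PM_RESOLUTION_MS) * PM_RESOLUTION_MS;
}
```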
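Similarly, a sketch of how canvas read-back might behave under PM, assuming the `PrivacyProtection` exception name suggested above and a hypothetical `inPrivacyMode` flag:

```typescript
// Illustrative sketch only: refuse canvas read-back in PM rather than return
// pixel data that can carry implementation/hardware fingerprints.
// "inPrivacyMode" is a hypothetical UA-internal flag; "PrivacyProtection" is
// the exception name floated in the example text above.
function toDataURLUnderPM(
  canvas: HTMLCanvasElement,
  inPrivacyMode: boolean,
  type?: string
): string {
  if (inPrivacyMode) {
    throw new DOMException(
      "Canvas read-back is not available in privacy mode",
      "PrivacyProtection"
    );
  }
  return canvas.toDataURL(type); // normal behavior outside of PM
}
```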
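And for the Web Audio example, a sketch of the profile-selection idea. Since the actual table of profiles isn't reproduced here, the `AudioProfile` shape and the values below are placeholders, not proposals:

```typescript
// Illustrative sketch only: in PM, report a generic audio profile rather than
// exact hardware capabilities. The profile shape and values are placeholders;
// the real profiles would come from the table referenced above.
interface AudioProfile {
  sampleRate: number;
  maxChannelCount: number;
}

// Ordered from highest to lowest functionality (placeholder values).
const PROFILES: AudioProfile[] = [
  { sampleRate: 48000, maxChannelCount: 6 },
  { sampleRate: 48000, maxChannelCount: 2 },
  { sampleRate: 44100, maxChannelCount: 2 },
];

function reportedProfile(actual: AudioProfile, inPrivacyMode: boolean): AudioProfile {
  if (!inPrivacyMode) {
    return actual; // normal behavior: report real hardware capabilities
  }
  // Select the highest-functionality profile the user's hardware matches.
  for (const profile of PROFILES) {
    if (
      actual.sampleRate >= profile.sampleRate &&
      actual.maxChannelCount >= profile.maxChannelCount
    ) {
      return profile;
    }
  }
  // Otherwise fall back to the most generic profile.
  return PROFILES[PROFILES.length - 1];
}
```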
## Comparison to "User Data Controls in Web Browsers" Draft

While they have some overlapping goals, I think the PM suggestion differs from the existing "User Data Controls in Web Browsers" (UDC) draft in several fundamental ways:

1) The UDC draft focuses on giving users more ways of controlling the lifetime and sharing of information generated during the user's browsing activities. The PM suggestion, on the other hand, aims to give spec authors a common hook for describing alternate API behavior. These overlap in some areas, but in general they tackle very different goals.

2) Because it focuses on aggregated user data, the UDC draft specifically rules fingerprinting concerns out of scope. The PM idea does not share that restriction, and is, in part, aimed at giving standards authors ways of defining less fingerprintable API surfaces.

3) The UDC spec envisions additional user controls (e.g. sliders) to give users new toggles to describe how much information leaves / persists on their machine. The PM suggestion targets ways that existing, binary signals (e.g. is the user in a privacy mode or not) can be leveraged to improve the level of privacy, and standard-ness, of standards.

## Create Private Browsing Repo

Done: https://github.com/w3cping/privacy-mode

> On Jan 15, 2019, at 1:21 PM, Christine Runnegar <runnegar@isoc.org> wrote:
>
> Thank you to those who joined the call today.
>
> The draft minutes are available here: https://www.w3.org/2019/01/15-privacy-minutes.html
>
> Action items from the call.
>
> - Pete S will consider Mark’s N draft - User Data Controls in Web Browsers - and see if there is anything to add or that would benefit from additional discussion at this stage. He will also do a rough write-up of two examples to help the group consider the more preferred way forward for a document. Those examples will be: resolution for timers and canvas read back. They will be shared on this email list.
>
> - When it makes sense (probably sooner rather than later), we will move the text to Github to facilitate contributions and issue tracking.
>
> - In the meantime, please give more thought to Pete’s proposed document and please share any feedback you may have, including any new ideas you may have in this area, on the email list.
>
> Many thanks to Pete for leading this effort, and to Jason, Nick and others for their very generous contributions.
>
> Christine (co-chair)
Received on Friday, 18 January 2019 23:14:12 UTC