- From: Pete Snyder <psnyder@brave.com>
- Date: Fri, 15 Feb 2019 17:17:31 -0800
- To: Mike West <mkwst@google.com>
- Cc: Christine Runnegar <runnegar@isoc.org>, "public-privacy@w3.org" <public-privacy@w3.org>
Howdy Mike,

Thanks much for your thoughts, greatly appreciated! I'm going to try and group the items you brought up, but if I miss something, please excuse it as ignorance, not arrogance ;)

## The practical concerns in the document

I only meant these as examples of how a "higher privacy mode" spec could be used in existing specs, not to advocate (here, at least) for any specific modifications. So, for now, let's put aside whether the fingerprint-ability of WebAudio / Canvas / timers should be solved / is solvable.

## Goals of the doc

That's valuable feedback; I can do another pass to try and make the goals clearer. But the two related, but distinct, goals are:

1) To give spec authors a common term / doc / "thing" to refer to when they want to signify "sometimes users want to move to a different place on the privacy / utility trade-off curve." Right now that seems to be done (I've been told…) with links to the Wikipedia article for private browsing mode, or with handwave-y, what's-the-point-of-a-standard-again sections that say "vendors might do things differently for privacy reasons", which serves no one's interests. The dream here would be to give spec authors a way of defining a second, in-spec, more-privacy-preserving path through their standard. (Without devolving into a million permission dials.)

2) To improve web compatibility, by giving privacy-oriented browsers the above-mentioned, in-standard, privacy-preserving implementation option to cohere around, even if it's only a privacy "floor". Right now everyone implements "privacy-preserving vendors can differ as they see fit" differently, causing constant web-compat problems. Having a common "if you want to implement / enable this standard, but gain some more privacy at the cost of some functionality" target would help greatly.

The above is all real informal, so if anything is unclear, please let me know, I can try to be only medium-informal :)

Pete

> On Feb 14, 2019, at 2:48 AM, Mike West <mkwst@google.com> wrote:
>
> Hey, Pete! Thanks for writing this down! I've added some thoughts inline.
>
> On Tue, Feb 12, 2019 at 1:40 AM Pete Snyder <psnyder@brave.com> wrote:
> Hi All,
>
> As per our last group call, I'm re-raising the below in response to TODO items regarding the private browsing mode document. Please let me know if there are follow-up questions, or anything else I can provide to keep the conversation moving forward.
>
> Thanks!
> Pete
>
> > On Jan 18, 2019, at 3:13 PM, Pete Snyder <psnyder@brave.com> wrote:
> > ## "Heightened Privacy Mode" and Current Specs
> >
> > ### High Resolution Timers
> > Several specifications currently define interfaces that return high-resolution (sub-ms accuracy) time measurements (e.g. Navigation Timing, Performance Timing). Many parties (e.g. PING, academic and community attack papers) have documented ways that these timers can be leveraged to violate privacy guarantees in the browser (e.g. cache attacks, hardware fingerprinting, history leaks, Spectre-style memory leaks).
> >
> > These specifications currently include "wobble" language, all roughly boiling down to "some browsers may decide to add noise or return less precise measurements". This is suboptimal for several reasons, including reducing the usefulness of the standard by giving privacy-concerned vendors no common alternate behavior to standardize around, and harming web compatibility by giving web authors no alternate behavior to write around.
> >
> > A PM section for these specifications might include text along the lines of:
> >
> > When the user has made a request for heightened privacy by using a privacy mode, or by selecting a privacy-oriented browser, implementors SHOULD reduce the resolution of these timers to microsecond-level resolution.
> >
> > (I'm not suggesting the above as a specific solution for timer-related problems, I'm just offering it as a motivating example.)
>
> One thing to consider here is that timing issues are somewhat pervasive throughout the platform, and it's not clear that reducing resolution on explicit timers has any real effect on an attacker's ability to precisely-enough time activities with security or privacy impacts (https://gruss.cc/files/fantastictimers.pdf is a great paper on the topic).
>
> Chromium's take on this problem generally is documented in https://chromium.googlesource.com/chromium/src/+/master/docs/security/side-channel-threat-model.md#attenuating-clocks.
>
> It might well be the case that users opting into a privacy mode could gain some benefit from the browser coarsening explicit timers' resolution, but I'd like to better understand the threat model you're positing generally. For example, if you're concerned about `:visited` leakage, a more concrete suggestion for browser vendors would be to drop the feature in privacy mode (where it's of limited use in any event, given the general decoupling of profile state that users have come to expect from such a mode).
>
> > ### Canvas
> > Many fingerprinting attacks use subtle implementation and hardware differences in how identical canvas instructions are rendered as a fingerprinting mechanism. A PM section in the canvas section of the HTML spec might then read something like the following:
> >
> > When the user has made a request for heightened privacy by using a privacy mode, or by selecting a privacy-oriented browser, implementors should not implement the `HTMLCanvasElement.prototype.toDataURL` and `HTMLCanvasElement.prototype.toBlob` methods. When in PM, calling these methods should throw a `PrivacyProtection` exception.
> >
> > Again, the text above is not meant to suggest a specific solution, only how any given specific solution could be integrated into a standard.
>
> That's certainly an approach that can be effective against canvas fingerprinting. It does make certain applications impossible to use in privacy mode, however (consider something simple like https://squoosh.app/), which is a real tradeoff. It might well be the right tradeoff for many users, but I think any recommendations should generally help browser vendors weigh those tradeoffs, rather than presenting the problem as one with a binary solution.
>
> In this specific case, for example, alternate suggestions (perhaps gating the APIs in third-party contexts, or allowing them only in response to user activation?) might have similar privacy-protecting value with less user-visible breakage.
>
> > ### Web Audio
> > Many endpoints in the Web Audio API reveal details about the underlying hardware, which is also frequently used to fingerprint users. A PM section in this spec might read:
> >
> > When the user has made a request for heightened privacy by using a privacy mode, or by selecting a privacy-oriented browser, implementors should not return complete information about the device's audio hardware.
> > Instead, the relevant APIs should return one of the following three profiles of audio information, selecting the highest-functionality one that the user's hardware matches. (table below...)
>
> Similar to the discussion above, WebAudio can be used to fingerprint a user's hardware, but can also generate audio. On the web. :)
>
> > ## Comparison to "User Data Controls in Web Browsers" Draft
> > While they have some overlapping goals, I think the PM suggestion is different from the existing "User Data Controls in Web Browsers" (UDC) draft, in several fundamental ways:
> >
> > 1) The UDC draft focuses on giving users more ways of controlling the lifetime and sharing of information generated during the user's browsing activities. The PM suggestion, on the other hand, aims to give spec authors a common hook for describing alternate API behavior. This overlaps in some areas, but in general seems to tackle very different goals.
> > 2) Because it focuses on aggregated user data, the UDC specifically rules fingerprinting concerns out of scope. The PM idea does not share that restriction, and is, in part, aimed at giving standards authors ways of defining reduced fingerprintable API surfaces.
> > 3) The UDC spec envisions additional user controls (e.g. sliders) to give users new toggles to describe how much information leaves / persists on their machine. The PM suggestion is targeting ways that existing, binary signals (e.g. is the user in a privacy mode or not) can be leveraged to improve the level of privacy, and standard-ness, of standards.
> >
> > ## Create Private Browsing Repo
> > Done: https://github.com/w3cping/privacy-mode
>
> I skimmed this doc, and I'm a bit confused about its purpose.
>
> Is the goal to serve as a definition of "privacy mode" that other specifications can link to? And a set of examples around which specification authors can build "Privacy Mode Considerations" sections for their specifications? The document explicitly disclaims documentation of shared threat models, which seems to me to be the most valuable service the document could offer (and, indeed, seems to be what you're aiming for with the discussion above).
>
> What would you like this document to be? How would you like it to be used?
>
> Thanks again for pushing this conversation forward!
>
> -mike
>
> >> On Jan 15, 2019, at 1:21 PM, Christine Runnegar <runnegar@isoc.org> wrote:
> >>
> >> Thank you to those who joined the call today.
> >>
> >> The draft minutes are available here: https://www.w3.org/2019/01/15-privacy-minutes.html
> >>
> >> Action items from the call:
> >>
> >> - Pete S will consider Mark N's draft - User Data Controls in Web Browsers - and see if there is anything to add or that would benefit from additional discussion at this stage. He will also do a rough write-up of two examples to help the group consider the more preferred way forward for a document. Those examples will be: resolution for timers and canvas readback. They will be shared on this email list.
> >>
> >> - When it makes sense (probably sooner rather than later), we will move the text to GitHub to facilitate contributions and issue tracking.
> >>
> >> - In the meantime, please give more thought to Pete's proposed document and please share any feedback you may have, including any new ideas you may have in this area, on the email list.
> >>
> >> Many thanks to Pete for leading this effort, and to Jason, Nick and others for their very generous contributions.
> >>
> >> Christine (co-chair)
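For concreteness, here is a minimal sketch of what the timer and canvas ideas discussed above could look like if expressed as page-level JavaScript patching the relevant APIs. It is illustrative only: the `PRIVACY_MODE` flag, the 100-microsecond granularity, and the use of a `SecurityError` `DOMException` (standing in for the hypothetical `PrivacyProtection` exception) are assumptions made for this example, not anything proposed in the thread.

```js
// Illustrative sketch only -- not spec text. Shows the general shape of the
// "heightened privacy mode" behaviors discussed above. The granularity value,
// the PRIVACY_MODE flag, and the exception details are assumptions.

const PRIVACY_MODE = true;          // stand-in for "the user asked for more privacy"
const TIMER_GRANULARITY_MS = 0.1;   // hypothetical coarsening to 100µs resolution

if (PRIVACY_MODE) {
  // Coarsen explicit high-resolution timers (cf. the High Resolution Timers section).
  const realNow = Performance.prototype.now;
  Performance.prototype.now = function () {
    return Math.floor(realNow.call(this) / TIMER_GRANULARITY_MS) * TIMER_GRANULARITY_MS;
  };

  // Gate canvas readback (cf. the Canvas section). A real recommendation might
  // instead limit this to third-party contexts or require user activation.
  const blockReadback = function () {
    // "SecurityError" is a stand-in for the hypothetical "PrivacyProtection" exception.
    throw new DOMException("Canvas readback is blocked in privacy mode", "SecurityError");
  };
  HTMLCanvasElement.prototype.toDataURL = blockReadback;
  HTMLCanvasElement.prototype.toBlob = blockReadback;
}
```

A real browser implementation would of course do this inside the engine rather than by monkey-patching prototypes; the snippet is only meant to make the proposed behaviors concrete.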
Received on Saturday, 16 February 2019 01:17:57 UTC