- From: Mike West <mkwst@google.com>
- Date: Thu, 14 Feb 2019 11:48:43 +0100
- To: Pete Snyder <psnyder@brave.com>
- Cc: Christine Runnegar <runnegar@isoc.org>, "public-privacy@w3.org" <public-privacy@w3.org>
- Message-ID: <CAKXHy=c6v0NOADdwV2jxuhf5cU3cECGWfbQ=uFjrwaKDBkhwcg@mail.gmail.com>
Hey, Pete! Thanks for writing this down! I've added some thoughts inline.

On Tue, Feb 12, 2019 at 1:40 AM Pete Snyder <psnyder@brave.com> wrote:

> Hi All,
>
> As per our last group call, I'm re-raising the below responses to TODO
> items regarding the private browsing mode document. Please let me know
> if there are follow-up questions, or anything else I can provide to
> keep the conversation moving forward.
>
> Thanks!
> Pete
>
> On Jan 18, 2019, at 3:13 PM, Pete Snyder <psnyder@brave.com> wrote:
>
> > ## "Heightened Privacy Mode" and Current Specs
> >
> > ### High Resolution Timers
> > Several specifications currently define interfaces that return high
> > resolution (sub-millisecond accuracy) time measurements (e.g.
> > Navigation Timing, Performance Timing). Many parties (e.g. PING,
> > academic and community attack papers) have documented ways that these
> > timers can be leveraged to violate privacy guarantees in the browser
> > (e.g. cache attacks, hardware fingerprinting, history leaks,
> > Spectre-style memory leaks).
> >
> > These specifications currently include "wobble" language, all roughly
> > boiling down to "some browsers may decide to add noise or return less
> > precise measurements". This is suboptimal for several reasons: it
> > reduces the usefulness of the standard by giving privacy-concerned
> > vendors no common alternate behavior to standardize around, and it
> > harms web compatibility by giving web authors no alternate behavior
> > to write around.
> >
> > A PM section for these specifications might include text along the
> > lines of:
> >
> > When the user has made a request for heightened privacy by using a
> > privacy mode, or by selecting a privacy-oriented browser, implementors
> > SHOULD reduce the resolution of these timers to microsecond level
> > resolution.
> >
> > (I'm not suggesting the above as a specific solution for timer-related
> > problems, I'm just offering it as a motivating example.)

One thing to consider here is that timing issues are somewhat pervasive
throughout the platform, and it's not clear that reducing the resolution of
explicit timers has any real effect on an attacker's ability to time
security- or privacy-relevant activities precisely enough
(https://gruss.cc/files/fantastictimers.pdf is a great paper on the topic).
Chromium's take on this problem generally is documented in
https://chromium.googlesource.com/chromium/src/+/master/docs/security/side-channel-threat-model.md#attenuating-clocks.

It might well be the case that users opting into a privacy mode could gain
some benefit from the browser coarsening explicit timers' resolution, but
I'd like to better understand the threat model you're positing generally.
For example, if you're concerned about `:visited` leakage, a more concrete
suggestion for browser vendors would be to drop the feature in privacy mode
(where it's of limited use in any event, given the general decoupling of
profile state that users have come to expect from such a mode).
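To make "coarsening explicit timers" concrete, a rough script-level sketch
might look something like the following. This is illustrative only: the
100µs quantum is an arbitrary example rather than a value from any spec or
browser, and a real implementation would live inside the engine and would
also have to deal with the implicit clocks the paper above describes.

```ts
// Illustrative only: a page-level shim that coarsens performance.now() to a
// fixed quantum. A real implementation would do this inside the engine, and
// would also need to cover Date.now(), event timestamps, and the many
// implicit clocks described in the paper above. The 100µs quantum here is an
// arbitrary example value, not one taken from any spec or browser.
const QUANTUM_MS = 0.1;

const realNow = performance.now.bind(performance);

performance.now = function coarsenedNow(): number {
  // Round down to the nearest quantum, so two calls cannot reveal
  // differences smaller than QUANTUM_MS.
  return Math.floor(realNow() / QUANTUM_MS) * QUANTUM_MS;
};
```

Even with something like this in the engine, it's those implicit clocks that
make me skeptical coarsening alone buys much, hence the threat-model
question.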
> > ### Canvas
> > Many fingerprinting attacks use subtle implementation and hardware
> > differences in how identical canvas instructions are rendered as a
> > fingerprinting mechanism. A PM section in the canvas section of the
> > HTML spec might then read something like the following:
> >
> > When the user has made a request for heightened privacy by using a
> > privacy mode, or by selecting a privacy-oriented browser, implementors
> > should not implement the `HTMLCanvasElement.prototype.toDataURL`
> > and `HTMLCanvasElement.prototype.toBlob` methods. When in PM, calling
> > these methods should throw a `PrivacyProtection` exception.
> >
> > Again, the text above is not meant to suggest a specific solution,
> > only how any given specific solution could be integrated into a
> > standard.

That's certainly an approach that can be effective against canvas
fingerprinting. It does make certain applications impossible to use in
privacy mode, however (consider something simple like https://squoosh.app/),
which is a real tradeoff. It might well be the right tradeoff for many
users, but I think any recommendations should help browser vendors weigh
those tradeoffs, rather than presenting the problem as one with a binary
solution. In this specific case, for example, alternate suggestions (perhaps
gating the APIs in third-party contexts, or allowing them only in response
to user activation?) might have similar privacy-protecting value with less
user-visible breakage.
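To make that tradeoff concrete, here is a purely illustrative, script-level
sketch of both directions: the strawman of refusing read-back outright (note
that `PrivacyProtection` is a hypothetical exception name taken from the
quoted text, not an existing DOMException type) and the user-activation
alternative. No engine implements it this way; a real implementation would
live in the engine and also cover `toBlob()`, `getImageData()`, and
`OffscreenCanvas`.

```ts
// Illustrative only: a script-level sketch of the two options above for
// HTMLCanvasElement.prototype.toDataURL. A real implementation would live in
// the engine and would also cover toBlob(), getImageData(), and
// OffscreenCanvas.
const realToDataURL = HTMLCanvasElement.prototype.toDataURL;

HTMLCanvasElement.prototype.toDataURL = function (
  this: HTMLCanvasElement,
  type?: string,
  quality?: any
): string {
  // Option A (the strawman quoted above): refuse read-back entirely in
  // privacy mode. "PrivacyProtection" is a hypothetical exception name from
  // the proposed spec text, not an existing DOMException type.
  // throw new DOMException("Canvas read-back disabled", "PrivacyProtection");

  // Option B (one alternative floated above): allow read-back only in
  // response to user activation, so background fingerprinting scripts fail
  // while user-initiated exports keep working.
  if (!navigator.userActivation?.isActive) {
    throw new DOMException(
      "Canvas read-back requires a user gesture",
      "SecurityError"
    );
  }
  return realToDataURL.call(this, type, quality);
};
```

The user-activation variant is the kind of middle ground I have in mind:
background fingerprinting scripts fail, while a user-initiated "save image"
flow keeps working.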
> > ### Web Audio
> > Many endpoints in the Web Audio API reveal details about the
> > underlying hardware, which are also frequently used to fingerprint
> > users. A PM section in this spec might read:
> >
> > When the user has made a request for heightened privacy by using a
> > privacy mode, or by selecting a privacy-oriented browser, implementors
> > should not return complete information about the device's audio
> > hardware. Instead, the relevant APIs should return one of the
> > following three profiles of audio information, selecting the
> > highest-functionality one that the user's hardware matches. (table
> > below...)

Similar to the discussion above, WebAudio can be used to fingerprint a
user's hardware, but can also generate audio. On the web. :)

> > ## Comparison to "User Data Controls in Web Browsers" Draft
> > While they have some overlapping goals, I think the PM suggestion is
> > different from the existing "User Data Controls in Web Browsers"
> > (UDC) draft in several fundamental ways:
> >
> > 1) The UDC draft focuses on giving users more ways of controlling the
> >    lifetime and sharing of information generated during the user's
> >    browsing activities. The PM suggestion, on the other hand, aims to
> >    give spec authors a common hook for describing alternate API
> >    behavior. This overlaps in some areas, but in general seems to
> >    tackle very different goals.
> > 2) Because it focuses on aggregated user data, the UDC specifically
> >    rules fingerprinting concerns out of scope. The PM idea does not
> >    share that restriction, and is, in part, aimed at giving standards
> >    authors ways of defining reduced fingerprintable API surfaces.
> > 3) The UDC spec envisions additional user controls (e.g. sliders) to
> >    give users new toggles to describe how much information leaves /
> >    persists on their machine. The PM suggestion is targeting ways
> >    that existing, binary signals (e.g. is the user in a privacy mode
> >    or not) can be leveraged to improve the level of privacy, and
> >    standard-ness, of standards.
> >
> > ## Create Private Browsing Repo
> > Done: https://github.com/w3cping/privacy-mode

I skimmed this doc
<https://github.com/w3cping/privacy-mode/blob/master/private-browsing.md>,
and I'm a bit confused about its purpose. Is the goal to serve as a
definition of "privacy mode" that other specifications can link to? And as a
set of examples around which specification authors can build "Privacy Mode
Considerations" sections for their specifications? The document explicitly
disclaims documentation of shared threat models, which seems to me to be the
most valuable service the document could offer (and, indeed, seems to be
what you're aiming for with the discussion above). What would you like this
document to be? How would you like it to be used?

Thanks again for pushing this conversation forward!

-mike

> >> On Jan 15, 2019, at 1:21 PM, Christine Runnegar <runnegar@isoc.org>
> >> wrote:
> >>
> >> Thank you to those who joined the call today.
> >>
> >> The draft minutes are available here:
> >> https://www.w3.org/2019/01/15-privacy-minutes.html
> >>
> >> Action items from the call.
> >>
> >> - Pete S will consider Mark N's draft - User Data Controls in Web
> >>   Browsers - and see if there is anything to add or that would
> >>   benefit from additional discussion at this stage. He will also do
> >>   a rough write-up of two examples to help the group consider the
> >>   preferred way forward for a document. Those examples will be:
> >>   resolution for timers and canvas read-back. They will be shared on
> >>   this email list.
> >>
> >> - When it makes sense (probably sooner rather than later), we will
> >>   move the text to GitHub to facilitate contributions and issue
> >>   tracking.
> >>
> >> - In the meantime, please give more thought to Pete's proposed
> >>   document and please share any feedback you may have, including any
> >>   new ideas you may have in this area, on the email list.
> >>
> >> Many thanks to Pete for leading this effort, and to Jason, Nick and
> >> others for their very generous contributions.
> >>
> >> Christine (co-chair)
Received on Thursday, 14 February 2019 10:49:18 UTC