- From: Nick Doty <npdoty@w3.org>
- Date: Tue, 30 Jun 2015 17:49:44 -0400
- To: Eric Rescorla <ekr@rtfm.com>, Mike O'Neill <michael.oneill@baycloud.com>
- Cc: "public-privacy (W3C mailing list)" <public-privacy@w3.org>, Jan-Ivar Bruaroey <jib@mozilla.com>
- Message-Id: <FB640085-DFA3-41E5-AA0E-04ACB8F65B3B@w3.org>
On Jun 30, 2015, at 5:29 PM, Eric Rescorla <ekr@rtfm.com> wrote:
>
> On Tue, Jun 30, 2015 at 2:24 PM, Mike O'Neill <michael.oneill@baycloud.com> wrote:
>
>> Nick, the problem here is transparency. The identifier created is
>> invisible unless the UA reports via some new UI that script has accessed a
>> MediaDeviceInfo (say by calling enumerateDevices). At least with cookies,
>> once a tracker has placed them they are always visible via standard UA UI,
>> and somewhat controllable by the user. IMO, to have equivalent transparency,
>> the required UI to report it should be explicitly called for in the
>> recommendation.

I don't think there's currently very effective transparency about cookies or other forms of local storage either. (For an empirical study of one -- me -- I can find the cookies stored in my browser for a domain after a few clicks, but I don't have any ambient awareness of them, and I'm not sure where I would find localStorage entries or IndexedDB, etc.)

Letting users know that a site is storing data (including a tracking identifier that will recognize them on subsequent visits) is a task for user agents across many different mechanisms, including cookies. It might be nice to write up advice for that in general, especially if you could get browser vendors interested in it. Personally, I would be more interested in the binary "is data being stored by this site?" than in trying to decipher the meaning of cookie values, local storage entries, or the values of exposed identifiers.

> The same is true of (say) canvas fingerprinting.

That's true; many fingerprinting mechanisms are difficult to provide transparency for. I don't take that as an argument that all future features should be equally so, such that no improvement in any feature can improve the overall situation. Canvas has seen some blocking for that reason; wouldn't it be better to avoid that with WebRTC if it's possible to do so?

>> The WebRTC standard is very troublesome from a privacy standpoint in other
>> ways. Not only are all your local IP addresses (inside the NAT) visible,
>
> I'd really like to see a good analysis for why you think addresses behind
> the NAT are especially sensitive:
>
> 1. They are drawn from a very small set of addresses (comparatively).
> 2. They change every time your public IP address changes.
>
> VPNs are different, of course, but then it's not clear how good a job VPNs
> do of preserving privacy.

I think Wendy has been starting a list for us of some of the concerns around local IP address discoverability. On the NAT question, off the top of my head I'm aware of two problems: 1) easier browser fingerprinting; 2) facilitating attacks on local network infrastructure.

VPNs are indeed different! I'm no expert on that particular question, but I imagine some people would disagree with the conclusion that, because there are some privacy risks even when using a VPN, there's no privacy harm in exposing IP addresses that users are obfuscating.

>> The ad blockers are already discussing blocks for WebRTC, and this is bound
>> to accelerate.
>
> That seems like the correct way to handle concerns of this type.
>
>> If WebRTC were only available after an explicit permission, this might be
>> headed off. At least there should be a restriction (enforced by UAs) not to
>> allow drive-by WebRTC calls when DNT is set.
>
> Overloading DNT like this seems like a fairly bad idea.

Agreed that overloading an HTTP signal doesn't seem like a good idea.
Of course, browsers may use private browsing modes or other indicators (which could be correlated or connected to a user's DNT preference) to decide when to expose information or block access to APIs. That's likely to be a user-agent-specific question, rather than something for the spec. I would prefer it if we could design APIs, as much as possible, so that browsers don't need to choose between providing functionality and exposing additional information about a user. Yes, users should be able to just turn off functionality altogether and sometimes will feel the need to do so, but if we can design a feature so that a large number of use cases don't require such a trade-off, then we can benefit from that. As it is now, do user agents that want to block access to the list of attached webcam devices have to completely block use of WebRTC, even when there's a permission grant?

—Nick
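For concreteness, a minimal sketch of the enumeration Mike refers to above: script can read deviceId values without any permission prompt (assuming the pre-permission exposure under discussion in this thread), and those opaque strings are the "invisible identifier" at issue.

```
// Sketch only: enumerate attached media devices from page script.
navigator.mediaDevices.enumerateDevices().then(function (devices) {
  devices.forEach(function (d) {
    // d.kind is "audioinput", "audiooutput", or "videoinput";
    // d.deviceId is an opaque string a page could store or report to a
    // server and later use to re-recognize this browser.
    console.log(d.kind, d.deviceId);
  });
});
```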
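Similarly, a minimal sketch of the local-address exposure being debated: a data-channel-only RTCPeerConnection is enough to start ICE candidate gathering, and host candidates can carry addresses from inside the NAT. (Exact constructor names and callback shapes vary across 2015-era browsers; the promise-based form is assumed here.)

```
// Sketch only: gather ICE candidates without any getUserMedia permission.
var pc = new RTCPeerConnection({ iceServers: [] });
pc.createDataChannel('probe');
pc.onicecandidate = function (e) {
  if (e.candidate) {
    // Candidate lines with "typ host" include local interface addresses.
    console.log(e.candidate.candidate);
  }
};
pc.createOffer().then(function (offer) {
  return pc.setLocalDescription(offer);
});
```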
Received on Tuesday, 30 June 2015 21:49:53 UTC