- From: Eric Mill <eric@konklone.com>
- Date: Sun, 15 Feb 2015 23:17:31 -0500
- To: www-tag@w3.org
(First ever post to the list! And most msgs aren't in my inbox cache. Sorry for misformatting.)

> I don't mean to be snarky, but merely to highlight that this is a problem
> regardless of whether you're talking CA certificates, split browsers,
> extensions, browser helpers, performance tuners, registry cleaners, ram
> doublers, free games, desktop buddies, or any number of the hundreds of other
> things people will download and run on their machines.

Of those, only CA certificates and extensions are things that browsers let users download and install _into_ the browser. (Not sure what "browser helpers" are.)

Chrome and Firefox have defined security and permission models around extensions. They're treated a bit like smartphone apps: they can wield power, and browsers attempt to define, constrain, or otherwise take some measure of responsibility for that power. It's still not a great situation, but it's not the total wild west.

Installing a third-party root certificate gives that third party wild powers over the user's browsing experience from then onwards, and through indirect means could affect subsequent downloads and installations of others' software. It's not physical access, but in today's world it's basically like putting on an Oculus Rift running unknown code and unknown apps. (A small sketch of what that means in practice is at the end of this message.)

> I would strongly disagree that this is, by any means, some "undocumented
> feature of the Web platform".

In the all-HTTPS web we're trying to get to, how the trust store is managed is a feature of the Web platform. There are so very many undocumented features of the Web platform, and there's a strong correlation between how undocumented they are and how exploitative they become.

When you visit a "modern web application", your browser spiders out to a dozen or two third-party services and passes all kinds of information about your visit. Understanding what just happened to you requires immense technical expertise, or reading still-pretty-dense research[1], and isn't indicated in browser chrome in any way. (It's also amazing that OCSP revocation checks have never, to my knowledge, been made visible even in developer tools. I've made websites for 17 years now, and I only learned a year ago that some browsers ping OCSP endpoints.)

The general population has _no idea_ that when they buy a piece of clothing on Amazon, they've provided information on their age and demographic to a website they'll visit 20 clicks from then. Is it any wonder that Do Not Track has so little traction? Most people don't know what it's protecting, weren't aware there was a problem, and don't know DNT exists anyway. Third-party tracking has remained a debate between different classes of elites, fighting over what the masses should be able to see and touch. For third-party content control to go the same route would be an even more dangerous outcome.

> Because this is a question of how the Web is presented to and understood by
> end users, and the W3C firmly owns that, not the IETF. Clearly you don't like
> it as a venue, but it's what we've got. Though if the WHATWG has an opinion,
> I'd be happy to chat.

100% agree. This isn't a protocol issue. It's a user education problem and an application-layer responsibility, and one whose importance we've all decided to ratchet way the hell up by pushing the web towards HTTPS for everything.

[1] https://www.eff.org/deeplinks/2015/01/healthcare.gov-sends-personal-data
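
P.S. For anyone who wants to picture the trust store point concretely, here's a minimal Python sketch. The file name is hypothetical and this is only an illustration of the mechanism, not anyone's actual code:

    import socket
    import ssl

    # A verification context like the one a browser carries around, with one
    # extra root added. "third_party_root.pem" is a hypothetical file standing
    # in for whatever certificate a third-party installer dropped in.
    ctx = ssl.create_default_context()                         # system roots
    ctx.load_verify_locations(cafile="third_party_root.pem")   # plus one more

    # From here on, any certificate that chains up to that extra root verifies,
    # for any hostname its holder cares to issue for -- which is the "wild
    # powers" point: the root's holder can sit in the middle of any HTTPS
    # connection this context makes, invisibly to the user.
    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.getpeercert()["subject"])

The browser-level situation is the same, just with the OS or browser trust store standing in for this little context, and with no UI telling the user it happened.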
Received on Monday, 16 February 2015 04:18:41 UTC