- From: Jeffrey Walton <noloader@gmail.com>
- Date: Thu, 19 Feb 2015 18:19:25 -0500
- To: Anne van Kesteren <annevk@annevk.nl>
- Cc: public-webapps WG <public-webapps@w3.org>
On Thu, Feb 19, 2015 at 4:31 PM, Anne van Kesteren <annevk@annevk.nl> wrote:
> On Thu, Feb 19, 2015 at 10:05 PM, Jeffrey Walton <noloader@gmail.com> wrote:
>> For what it's worth, I'm just the messenger. There are entire
>> organizations with Standard Operating Procedures (SOPs) built around
>> the stuff I'm talking about. I'm telling you what they do based on my
>> experiences.
>
> From your arguments, though, it sounds like they would be fine with
> buying PCs from Lenovo with installed spyware, which makes it all
> rather dubious. You can't cite the Lenovo case as a failure of
> browsers when it's a compromised client.

No :) The organizations I work with have SOPs in place to address that.
They would not be running an unapproved image in the first place.

*If* the user installed a CA for interception purposes, then yes, I
would blame the platform. The user does not set organizational policies,
and it's not acceptable for the browser to allow the secure channel to
be subverted by an externality.

I think the secret ingredient missing from the browser secret sauce is a
Key Usage of INTERCEPTION. That way, a user who installs a certificate
without INTERCEPTION won't be able to use it for interception, because
the browser won't break a known-good pinset without it. And users who
install one with INTERCEPTION will know what they are getting.

I know it sounds like Steve Bellovin's "Evil Bit" RFC (an April Fools'
Day RFC), but that's what the security model forces us into, because we
can't differentiate between the "good" bad guys and the "bad" guys.

In native apps (and sometimes hybrid apps), we place a control to ensure
that does not happen. We are not encumbered by the broken security
model.

Jeff
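[Archive note: the gating logic Walton proposes could be sketched roughly as below. This is a minimal illustration, not any browser's actual code: the INTERCEPTION key usage does not exist in X.509, and the `Certificate` record, pinset table, and function names here are all hypothetical.]

```python
# Hypothetical sketch of pinset checking with an "INTERCEPTION" key
# usage, per the proposal above. All names and structures are
# illustrative assumptions, not a real browser or X.509 API.
from dataclasses import dataclass, field

@dataclass
class Certificate:
    spki_hash: str                          # hash of the SubjectPublicKeyInfo
    key_usages: set = field(default_factory=set)

# Known-good pins for a host (as shipped with, or learned by, the browser).
PINSET = {"example.com": {"sha256/AAAA", "sha256/BBBB"}}

def chain_acceptable(host: str, chain: list[Certificate]) -> bool:
    """Accept a validated chain if it matches the host's pinset, or if
    its root was installed with the (hypothetical) INTERCEPTION usage.
    `chain` is ordered leaf first, root last."""
    pins = PINSET.get(host)
    if pins is None:
        return True  # no pins for this host; ordinary validation applies
    if any(cert.spki_hash in pins for cert in chain):
        return True  # chain matches a known-good pin
    # A locally installed CA may override the pinset only when it
    # explicitly carries INTERCEPTION -- so the user knows what they got.
    root = chain[-1]
    return "INTERCEPTION" in root.key_usages
```

Under this sketch, a user-installed CA without INTERCEPTION cannot break a known-good pinset, while one installed with it can, visibly and deliberately.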
Received on Thursday, 19 February 2015 23:19:52 UTC