From: Ryan Sleevi <sleevi@google.com>
Date: Tue, 9 Oct 2012 15:11:01 -0700
To: David Dahl <ddahl@mozilla.com>
Cc: David Rogers <david.rogers@copperhorses.com>, public-webcrypto@w3.org, hhalpin@w3.org
On Tue, Oct 9, 2012 at 2:13 PM, David Dahl <ddahl@mozilla.com> wrote:
>
> ----- Original Message -----
>> From: "Ryan Sleevi" <sleevi@google.com>
>> To: "David Dahl" <ddahl@mozilla.com>
>> Cc: "David Rogers" <david.rogers@copperhorses.com>, public-webcrypto@w3.org, hhalpin@w3.org
>> Sent: Tuesday, October 9, 2012 3:41:22 PM
>> Subject: Re: Was: Draft Blog Post on Cryptography API, Now: Potential API recommendation caveats
>
>> It sounds like your solution offers nothing more than a signature on
>> the (initial) code, which is the same as offered by a number of
>> existing extension mechanisms (e.g., both Firefox and Chromium).
>>
>> Again, you make reference to a more "trustworthy" environment, but
>> it's unclear what your concerns are that you feel are mitigated here.
>> An extension/Open Web App/SysApp that, say, calls eval on the result
>> of an XHR over HTTP is just as likely to get owned as a web page.
>
> I think this would be much less likely, but is of course still possible.

It is incredibly common, and a frequent subject of research into extension
security. As the literature demonstrates, the concerns apply to all of the
existing extension mechanisms (Chrome, Firefox, Safari). Chrome has tried
to restrict extensions even further - and other browsers have followed,
and perhaps surpassed it in some areas - but I think it's a fundamental
issue.

For example: http://www.eecs.berkeley.edu/~afelt/extensionvulnerabilities.pdf

Excerpt from Section 3:

"We reviewed 100 Google Chrome extensions from the official directory.
This set comprised of the 50 most popular extensions and 50
randomly-selected extensions from June 2011. Section 3.1 presents our
extension review methodology. Our security review found that 40% of the
extensions contain vulnerabilities, and Section 3.2 describes the
vulnerabilities. Section 3.3 presents our observation that 31% of
developers do not follow even the simplest security best practices."
Extensions/Firefox OS apps/"Open Web Apps" can request data over HTTP
rather than HTTPS. They can use innerHTML instead of innerText (thus
causing inline script execution). They can use eval rather than
JSON.parse. Neither the Chrome/Chromium nor the Firefox/Firefox OS
architecture prevents the malleability of the runtime that you seem to be
asserting it does. While code signing may prevent some forms of XSS
persistence, Facebook's example shows that extensions can be compromised
persistently as well, without invalidating the code signature.

I'm not the person to talk to about extension security for Chrome, and in
previous discussions I've understood you to make the same claim for
yourself regarding Firefox, but I think we must be clear here that while
"SysApps" can offer a number of compelling solutions, they are not a
single and isolated solution - and in fact, they may offer less security
than that offered by SSL+CSP.

>> While I appreciate the security concern, I feel like there's some
>> handwaving here that it's better, and I'm trying to understand the
>> concrete concerns here. Is it just that the (initial) code is signed
>> (since it can always change later)?
>
> If the code changes, it was again signed and is again verified upon
> re-install

Incorrect. This can be circumvented through a number of means.

>> That the user explicitly installed
>> the extension (which seems wholly unrelated to malleability or any of
>> the other security concerns raised)?
>
> True.
>
>> What I'm trying to tease out here is what security properties are
>> *unique* to what you're proposing that are not already available to
>> the web platform, AND why you feel those security properties are
>> essential to the API.
>>
>> To put it differently, if the API required CSP and an HTTPS origin,
>> what concerns do you have that are fundamentally non-applicable to
>> your Extension/"Open Web App" scenario?
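To make the eval-vs-JSON.parse point above concrete, here is a minimal
sketch (not from the thread; the function names and payloads are
hypothetical). An extension that evals a response fetched over plain HTTP
will execute whatever a network attacker injects, while JSON.parse only
ever accepts data:

```javascript
// Unsafe pattern described above: eval() executes arbitrary code if the
// response was tampered with in transit (e.g., fetched over plain HTTP).
function parseUnsafe(responseText) {
  return eval("(" + responseText + ")");
}

// Safer pattern: JSON.parse accepts only data, never executable code.
function parseSafe(responseText) {
  return JSON.parse(responseText);
}

const benign = '{"user": "alice"}';
console.log(parseSafe(benign).user); // "alice"

// A tampered response that is valid JavaScript but not valid JSON:
const tampered =
  '(function(){ /* attacker code would run under eval */ return {user:"alice"}; })()';
try {
  parseSafe(tampered); // throws SyntaxError - the injected code is rejected
} catch (e) {
  console.log("rejected:", e instanceof SyntaxError);
}
```

The same reasoning applies to innerHTML versus innerText/textContent:
the former interprets attacker-controlled markup, the latter treats it
as inert text.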
> I think a locally installed, verified application fetched from an
> "honest broker" like Mozilla's or Google's AppStores is far and away a
> better security risk than a web page - even with HTTPS and CSP.

I fear I must re-ask the question. I'm well aware that you consider
AppStores to offer better security (though I respectfully disagree). What
I'm trying to find out is *why* you think that.

Is the assumption that all extensions/apps are scanned for "hostile" code
or poor security? That's a halting problem - it's an unrealistic
expectation. Sure, it may offer the ability for blacklisting, but as any
user of any app store ever (Google, Apple, Mozilla, Amazon, Cydia, etc.)
can tell you, it's not a perfect solution. As I demonstrated above, the
"code signing" in no way prevents mutation of the extension or hot
patching - whether the extension starts out innocent (and is compromised)
or hostile (and is maliciously abused).

So what *else* is useful here? Restricting cookie jars? Privilege
separation? These are things already implemented today in browsers such
as Chrome, with their multi-process model. So while I can appreciate
Firefox's interest in it, I think you may be conflating two ideas if you
think it's necessary to use "web apps" to get those benefits. Chrome (and
even WebKit2, as used by Safari) demonstrates that this can be done within
origins on the general web.

I'm just trying to be pragmatic and tease out what you see our security
*requirements* as being, as opposed to the method by which they should be
accomplished. I think this will be essential to updating the spec
appropriately with any recommendations, and it is the point multiple
members raised during our previous con-call (that we should focus on the
security requirements, rather than the implementation details).
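For readers unfamiliar with the SSL+CSP baseline being proposed, it can
be made concrete with a response header. The policy below is purely
illustrative (it is not one proposed in this thread); under CSP 1.0, a
script-src directive that omits 'unsafe-inline' and 'unsafe-eval'
disables both inline script execution and eval() - exactly the extension
attack patterns discussed above:

```http
Content-Security-Policy: default-src https:; script-src 'self'; object-src 'none'
```

Served over an HTTPS origin, this restricts script to same-origin files,
blocks plugins, and confines subresource loads to HTTPS.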
Received on Tuesday, 9 October 2012 22:11:29 UTC