Re: W3C Web Crypto WG - Is our deliverable doomed ?

On 9/18/12 12:16 PM, "Harry Halpin" <hhalpin@w3.org> wrote:

On 09/18/2012 07:51 PM, Ryan Sleevi wrote:


On Tue, Sep 18, 2012 at 10:42 AM, Mark Watson <watsonm@netflix.com> wrote:

On Sep 18, 2012, at 10:29 AM, Ryan Sleevi wrote:


On Tue, Sep 18, 2012 at 8:53 AM, Mark Watson <watsonm@netflix.com> wrote:
One of the points missing from the article, which we have considered a lot, is the fact that it is possible to build systems with useful security properties, whilst always accepting that we can't really trust the Javascript code at the client (for the reasons given in the article).

Specifically, we trust the browser code more than the Javascript code, and we trust code in secure elements even more. We take care to understand the security properties we have through the lens of exactly which operations are being performed by which code and with which data.

This is why the API becomes much more interesting when we introduce concepts like pre-provisioned keys. Without them, I fear the API does indeed suffer from many of the issues identified in the article.

Pre-provisioned keys allow us to bind to something we trust, even if that is just the browser code, and from there we can infer something useful. Without that, any Javascript could be using a malicious polyfill of the WebCrypto API and all your security bets are off.

Having said that, it is certainly possible to 'simulate' pre-provisioned keys (insecurely) in a polyfill for test and API validation purposes. I wouldn't rule out some kind of obfuscation-based JS polyfill implementation with pre-provisioned keys, but that does sound like a "challenging" project that I am not about to embark on ;-)
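
[By way of illustration only, a minimal (and deliberately insecure) sketch of what such a test polyfill might look like; the name FakePreProvisionedKeyStore and its shape are hypothetical and not part of any draft API:]

    // Hypothetical, insecure test shim: "pre-provisions" a key by simply
    // embedding it in script. Useful only for exercising API shapes and
    // validation flows, since any page script (or attacker) can read the
    // key material directly.
    var FakePreProvisionedKeyStore = {
      // In a real secure element this key would never be extractable.
      _rawKeyHex: "00112233445566778899aabbccddeeff",

      getKey: function (keyId) {
        // Ignores provisioning entirely; always returns the baked-in test key.
        return {
          id: keyId,
          algorithm: { name: "HMAC", hash: "SHA-256" },
          extractable: true,
          _raw: this._rawKeyHex
        };
      }
    };

    // Usage sketch: application code asks for the "device" key and hands it
    // to whatever sign/verify polyfill is under test.
    var deviceKey = FakePreProvisionedKeyStore.getKey("device-key-1");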

…Mark


Respectfully, I have to disagree with Mark here. I do not think pre-provisioned keys (smart card or device) in themselves buy any additional security properties, just as they would not and do not within native applications.

That's a bold statement which requires only an existence proof to refute.

If at the server I receive a message signed by a pre-provisioned key that I know was placed into a specific hardware module, then I know, up to the security of that hardware module, that the message came from code (malware or otherwise) that is able to communicate with that hardware.

This is a security property. And it is useful.
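
[Purely as an illustration of that property, and not anything the WG specifies: on the server side, verification against the public half of a key known to have been provisioned into a particular hardware module might look like the Node.js sketch below. The variable devicePublicKeyPem, the message format, and the signature encoding are all assumptions.]

    var crypto = require('crypto');

    // Assumed: the service provider recorded this public key when the
    // corresponding private key was provisioned into the hardware module.
    var devicePublicKeyPem = '-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----\n';

    function verifyFromDevice(message, signatureBase64) {
      // If this verifies, the message came from code able to communicate
      // with that module -- up to the security of the module itself.
      var verifier = crypto.createVerify('RSA-SHA256');
      verifier.update(message);
      return verifier.verify(devicePublicKeyPem, signatureBase64, 'base64');
    }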

…Mark


In the context of the document referred to, which was specifically discussing the malleability of the content runtime and the client script, my position remains the same. A smart card, secure element, or otherwise pre-provisioned key is not a defense against, or an added security property for, such a set of attacks.

Further, this is no different than native code, which equally has the "malware or otherwise" problem.

Apologies if I seemed to be making an overbroad statement here: certainly, I agree that hardware modules can provide some set of additional properties. But in the context of malware and malleability, I do not believe they are part of the equation.

There are obviously different roots of trust. In fact, what I'm hearing a lot now is that due to the "vetting" of the Apple Store etc., many folks are rooting trust in native apps. However, I would agree with Ryan that with CSP and the Crypto API we can mitigate the content runtime and the client script malleability issue. So, on that level I'm not sure if there's a big difference between a Crypto API for Javascript and a Crypto API for Java or any other programming language, as *every client app* has the ability to be compromised or insecure at the level of code and upgrade paths. So, the deliverable is not doomed, any more so than any other "app", non-Web or Web.
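
[To make the CSP part of that concrete: an illustrative policy, not a WG deliverable, and the header name has varied across drafts and vendor prefixes. Something along these lines confines script to the page's own origin and closes the easiest route for injecting a malicious polyfill:]

    Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'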

And I would not personally put my trust in the CA system until something like Key Transparency is widely deployed, so again, "just use TLS" will not work without pinning in the general case, and would not work for many of our use-cases regardless.
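
[For what pinning can mean concretely (again, only an illustration and not a WG deliverable), a Node.js client might compare the peer certificate's fingerprint against a value recorded out of band; the host name and fingerprint below are placeholders:]

    var tls = require('tls');

    // Placeholder: fingerprint of the certificate we expect, obtained out of
    // band (e.g. shipped with the application), not taken from the CA system.
    var expectedFingerprint = 'AB:CD:EF:...';

    var socket = tls.connect({ host: 'service.example.com', port: 443 }, function () {
      var cert = socket.getPeerCertificate();
      if (cert.fingerprint !== expectedFingerprint) {
        // Certificate differs from the pinned one: refuse to talk.
        socket.destroy(new Error('Pin mismatch'));
        return;
      }
      // Only now is it reasonable (up to the pin) to send sensitive traffic.
    });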

And yes, I'd agree that pre-provisioned keys, smart-cards, etc. can add some element of trust *if* that's where you want to root trust. Rooting trust in the user is, I think, actually quite dangerous, as every study I have ever looked at shows that users often do not have the right intuitions regarding security aspects of the Web like TLS and reading URIs, much less key management. I think for most apps we should aim for *key invisibility*, making the use of keys as invisible to the user as possible. Of course, we need to keep the ability for an educated user to check what's going on, and ideally have the WebApp or browser appear to do "the right thing." Where to draw that line is tricky, but it seems the WG is more willing to have apps draw that line.

When the browser ceases to be a trustworthy agent but is still used to access sensitive data, the user is placing 'implicit' trust in the browser. The service provider would have instructed the user to follow certain guidelines to access their service over the web – like using a recommended set of browsers, using pre-defined URL(s) to log in, checking the 'EV clues' provided by browsers, and so on. So, the user has a responsibility to follow directions. The service provider also has a responsibility to vet the browsers/clients they are recommending. So, when my bank allows me to check my account online, it is trusting that I am using a valid browser – a browser that does not share my session cookie, for example.

When there are no clues for service providers as to the UA used to access the account online, the only trust they can place is in the user. If the users are not sophisticated enough, we will have problems like phishing – nothing that the browser can help with.



   cheers,
     harry

Received on Tuesday, 18 September 2012 20:19:54 UTC