RE: W3C Web Crypto WG - Is our deliverable doomed ?

Dear all,
I did not want to distract you from your work when I started this thread. The good aspect is that, with these different exchanges, we can definitely build a value proposition for our API: understand how we help security, what is within our power, and what is not.
I would be happy to write down a synthesis of the arguments made here and share it with the group. This could definitely help speakers and promoters of this API.
Regards,
Virginie

From: Seetharama Rao Durbha [mailto:S.Durbha@cablelabs.com]
Sent: Tuesday, 18 September 2012 23:01
To: Ryan Sleevi
Cc: Mark Watson; Harry Halpin; GALINDO Virginie; public-webcrypto@w3.org; Wendy Seltzer
Subject: Re: W3C Web Crypto WG - Is our deliverable doomed ?

Well, I guess then we need to bullet-list those 'some'. My suspicion is that most of them are (browser) implementation specific and have no impact on the API itself. As you pointed out earlier, JS has no way of verifying the environment in which it is running, and thus cannot judge the (browser) implementation. So application developers themselves will gain no additional benefit beyond the API.

On 9/18/12 2:49 PM, "Ryan Sleevi" <sleevi@google.com> wrote:

On Tue, Sep 18, 2012 at 1:33 PM, Seetharama Rao Durbha
<S.Durbha@cablelabs.com> wrote:
I guess if we ask ourselves "Are we solving any of the issues raised by
Matasano, or are we just providing an API?",  I think the answer is 'just
providing an API'.

I strongly disagree with this, for the reasons I outlined.

I do not believe the problems raised by Matasano are the same set
of problems you're focusing on.

We are solving *some of* the problems raised. Other parts of the
problems are not at all related to crypto, but related to the web
platform in general - and they're being addressed elsewhere (ex: CSP,
CORS).

This is no different than concerns of malware being addressed by
various solutions, such as vendor-provided application stores, code
signing (including EV code signing), etc. It's not fair to say "we're
just providing an API, not solving the problems".

That said, it's certainly fair to say "We're not solving every single
bullet point in detail," which I hoped was abundantly clear from our
charter to begin with, and certainly what I believe is the right
approach in general. But that's nowhere near the same as just throwing
our hands up and saying "It's just an API" - because it's not.



The implementations could provide a secure RNG, secure storage, etc., but
none of them address Matasano's concerns (malleability, for example). By
pretense, I meant: do we talk about these things at all, or do we just say
that is not our focus and move on?


On 9/18/12 2:04 PM, "Ryan Sleevi" <sleevi@google.com> wrote:

On Tue, Sep 18, 2012 at 12:56 PM, Seetharama Rao Durbha
<S.Durbha@cablelabs.com> wrote:


One comment inline.

On 9/18/12 11:29 AM, "Ryan Sleevi" <sleevi@google.com> wrote:


On Tue, Sep 18, 2012 at 8:53 AM, Mark Watson <watsonm@netflix.com> wrote:


One of the points missing from the article, which we have considered a lot,
is the fact that it is possible to build systems with useful security
properties, whilst always accepting that we can't really trust the
Javascript code at the client (for the reasons given in the article).


Specifically, we trust the browser code more than the Javascript code and we
trust code in secure elements even more. We take care to understand the
security properties we have through the lens of exactly what operations are
being performed by which code and with which data.

This is why the API becomes much more interesting when we introduce concepts
like pre-provisioned keys. Without them, I fear the API does indeed
suffer from many of the issues identified in the article.

Pre-provisioned keys allow us to bind to something we trust, even if that is
just the browser code, and from there we can infer something useful. Without
that, any Javascript could be using a malicious polyfill WebCrypto API
and all your security bets are off.
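
As a purely illustrative sketch (the fallback check, the values and the
comments below are made up, not a real attack), a "polyfill" that installs
itself when the native API is missing could look like the following, and
application code would have no way to tell the difference:

    // Purely illustrative: a "WebCrypto polyfill" loaded by the page
    // (perhaps from a compromised CDN). Code that falls back to a polyfill
    // cannot verify what it is actually calling.
    if (!window.crypto || !window.crypto.getRandomValues) {
      window.crypto = {
        getRandomValues: function (array) {
          // Looks like the real API, but the output is fully predictable.
          for (var i = 0; i < array.length; i++) {
            array[i] = i & 0xff;
          }
          return array;
        }
        // A hostile polyfill could likewise expose key generation and
        // signing while quietly copying key material to an attacker.
      };
    }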

Having said that, it is certainly possible to 'simulate' pre-provisioned
keys (insecurely) in polyfill for test and API validation purposes. I
wouldn't rule out some kind of obfuscation-based JS polyfill implementation
with pre-provisioned keys, but that does sound like a "challenging" project
that I am not about to embark on ;-)
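
For what it's worth, a rough (and intentionally insecure) sketch of what such
a test-only simulation might look like - the key identifier, key bytes and
lookup helper below are hypothetical, purely for exercising callers of the API:

    // Test/validation only: "pre-provisioned" keys simulated in script.
    // Anyone who can read this source can read the keys, so it proves
    // nothing about security; it only lets callers exercise the API shape.
    var testKeyStore = {
      "device-key-1": new Uint8Array([0x01, 0x02, 0x03, 0x04]) // hypothetical id and bytes
    };

    function getPreProvisionedKey(keyId) {
      // In a real implementation this lookup happens inside the browser
      // (or a secure element) and the key material never reaches script.
      return testKeyStore[keyId] || null;
    }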

...Mark


Respectfully, I have to disagree with Mark here. I do not think
pre-provisioned keys (smart card or device), in themselves, buy any
additional security properties, just as they would not and do not within
native applications.

To see how I get there, let's take a step back and look
at the points raised in the article:

- Secure delivery of Javascript to browsers is hard
  - If you have SSL, just use SSL
- Browsers are hostile to cryptography
  - The prevalence of content-controlled code
  - The malleability of the Javascript runtime
  - The lack of systems programming primitives needed to implement crypto:
    - The browser lacks secure random number generation
    - The browser lacks secure erase
    - The browser lacks functions with known-timing characteristics
      (see the sketch after this list)
    - A secure keystore
- The crushing weight of the installed base of users
- The view-source transparency is illusory
  - Unlike native applications, Javascript is delivered on demand and thus
    may be mutated in time
  - An exploit server side can compromise many tens or hundreds of thousands
    of users
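
As an aside, the known-timing point is easy to illustrate. The two comparison
functions below are a sketch only (the names are made up); they show why this
primitive cannot reliably be built in script:

    // Naive comparison: returns as soon as a byte differs, so the time
    // taken leaks how long the matching prefix is (bad for MAC checks).
    function naiveEqual(a, b) {
      if (a.length !== b.length) return false;
      for (var i = 0; i < a.length; i++) {
        if (a[i] !== b[i]) return false; // early exit leaks timing
      }
      return true;
    }

    // An attempted constant-time comparison. Even this pattern cannot be
    // guaranteed from script: the JIT, garbage collection and hidden type
    // checks can reintroduce data-dependent timing, which is why the
    // primitive belongs below the Javascript layer.
    function tryConstantTimeEqual(a, b) {
      if (a.length !== b.length) return false;
      var diff = 0;
      for (var i = 0; i < a.length; i++) {
        diff |= a[i] ^ b[i]; // accumulate differences, no early exit
      }
      return diff === 0;
    }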

To address these points, let's look at what we have at our disposal.

This work

Our API so far provides secure RNG and functions with known-timing
characteristics, along with a secure keystore. Yes, we don't offer a secure
erase, nor do we offer a generic secure memory comparison, and perhaps those
are things we can look at in the future. But I'd suggest that, given the
general framework of what is brought by the API, it's not as c

I think the bolded statement above is the root of all questions. We are
pretending that the API comes with specific guarantees around crypto
functionality and secure storage. I say that we get away from that entirely.
We just say that the API is what it is - just an API - and the application
MUST treat a client using these APIs like any other client: untrusted. Any
trust can come only from external sources that the server application
controls - as in Mark's example.


I'm sorry you feel we've been pretending - I certainly haven't meant
there to be any such pretense.

Despite that, I think we still offer improvements over a strict
polyfill - not the least of which being the ability to support
non-extractable keys.
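
To make that concrete, here is a minimal sketch using the promise-based
crypto.subtle shape for illustration; the algorithm choice (AES-GCM, 256-bit)
is arbitrary, and the essential part is the extractable flag set to false,
which a script-only polyfill cannot honestly enforce:

    // Generate an AES-GCM key that script can use but never export.
    crypto.subtle.generateKey(
      { name: "AES-GCM", length: 256 },
      false,                       // extractable: false
      ["encrypt", "decrypt"]
    ).then(function (key) {
      // "key" is an opaque handle; the user agent holds the key material,
      // and exporting it would fail because the key is non-extractable.
      // A polyfill can mimic this shape, but it cannot keep the underlying
      // bytes away from the script that created them.
    });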

The only guarantees of functionality or storage we're making are with
respect to the user agent and arbitrary origins. Everything beneath
the user agent is (intentionally) not specified. I agree, if you're
looking for strong assurances of a particular nature, you may need to
know everything from the CPU to the user agent, but I don't think all
applications or consumers are looking for those guarantees. We can
directly meet the needs of applications that do not care, and we can
provide the framework and guarantees for those that do to build out
the guarantees "underneath" the user agent in order to reach their
desired level of assurance.

This is the general issue with concepts like "trust" or "security" -
they mean different things to different people, and a clear definition
(of degree or kind) has yet to emerge. That said, I don't think our
efforts need to focus on such a definition - let's focus on an API
instead :-)

Received on Wednesday, 19 September 2012 09:38:16 UTC