Re: ECC vs RSA, and Similar Conflicts

From: Richard L. Barnes <rbarnes@bbn.com>
Date: Fri, 1 Jun 2012 13:39:35 -0400
Cc: "Da Cruz Pinto, Juan M" <juan.m.da.cruz.pinto@intel.com>, David McGrew <mcgrew@cisco.com>, Eric Rescorla <ekr@rtfm.com>, Anil Saldhana <Anil.Saldhana@redhat.com>, "public-webcrypto@w3.org" <public-webcrypto@w3.org>
Message-Id: <EB569EC6-B6C8-4450-9472-F640E29B5F1D@bbn.com>
To: Seetharama Rao Durbha <S.Durbha@cablelabs.com>
Assuring the server that the client's key is held in a given module is a really non-trivial problem.  If you want to do that, you need some form of remote attestation [1].  And for that, you'll need to issue certificates to browser crypto engine instances (or have browsers pass through certs from chips), and have an auditing API so that the crypto engine can prove to the server that it did what the server asked.  So assuring servers of key provenance may be technically solvable, but it's a long, twisty trail to go down, and not one that I think should be a primary feature.

Ideas like PKCS#11 and "safe/unsafe" API calls are still relevant, though, because they provide other flavors of assurance.  For instance, if I'm a client and I run code that only makes safe API calls, then I know that my key hasn't been exported somewhere without my knowledge.


[1] e.g., for TPMs <http://en.wikipedia.org/wiki/Trusted_computing#Remote_attestation>

On Jun 1, 2012, at 11:45 AM, Seetharama Rao Durbha wrote:

> I am not completely convinced that PKCS#11 is applicable to a JavaScript
> crypto API provided by a browser. In the case of PKCS#11, keys are
> built in through a trusted mechanism (at manufacturing time or through
> a trusted process). In the case of a browser, keys are not built in,
> and there is no way to get keys into a browser in a way that the
> server can trust.
> Put another way, the ultimate beneficiary of any assurance we can
> provide on the browser side is actually the server, so that the server
> can be assured that a client key is secure and not compromised, thus
> allowing access to some services exposed by the server.
> But given the nature of HTTP and the web, I am not sure the server can
> be given that assurance. Take two cases: one where I have developed a
> very secure implementation in JS using the crypto API, and another
> where I am a rogue client that mimics whatever the JS would do
> (including any user authentication). In the first case, my keys are
> secure in the browser; in the second, my keys are known to my rogue
> client. The question is: how can the server be assured that one client
> is based on a secure browser implementation and the other is not?
> Also, as more and more services are exposed as web services, there is
> a need to support multiple client types (browsers as well as custom
> clients).
> Consider a related case: installing certificates in the browser. There
> is no secret sauce to the protocol, so any custom client (not a
> browser) can mimic the browser. The assurance to the server does not
> actually come from the technology per se, but from the user: the user
> is expected to treat the certificate as securely as possible;
> otherwise, it is their account that will get compromised.
> Seetharama
> On 5/30/12 1:06 PM, "Da Cruz Pinto, Juan M"
> <juan.m.da.cruz.pinto@intel.com> wrote:
>> Keep in mind that PKCS#11 defines an API for accessing crypto
>> operations, one which does not require the caller to have direct
>> access to key material. For instance, most HSM (Hardware Security
>> Module) vendors provide a PKCS#11 library for developers to integrate
>> with.
>> This means that if you are using a PKCS#11 module, then you don't
>> really need safe/unsafe sections of the API when using, e.g., RSA.
>> Moreover, if you are using a smartcard through a PKCS#11 module, you
>> most probably will not be able to access the key material at all.
>> Developers try to avoid manipulating private key material in code for
>> several reasons (difficulty, security concerns, etc.). Developers
>> might need to access public key material (e.g., in cases where they
>> need to package signatures and certificates in custom protocols), but
>> not typically private key material.
>> Marcelo.
>> -----Original Message-----
>> From: David McGrew [mailto:mcgrew@cisco.com]
>> Sent: Tuesday, May 29, 2012 17:55
>> To: Richard L. Barnes
>> Cc: Eric Rescorla; Anil Saldhana; public-webcrypto@w3.org
>> Subject: Re: ECC vs RSA, and Similar Conflicts
>> Hi Richard,
>> On May 25, 2012, at 3:39 PM, Richard L. Barnes wrote:
>>> How about this as a compromise:  Split the API into two halves, safe
>>> and unsafe.  The safe methods preserve key isolation, have been reviewed
>>> by Dan, etc.  The unsafe methods might leak key material.
>> I think this dichotomy makes sense. It seems technically feasible,
>> and as a direction it allows the development of both safe and unsafe
>> APIs in parallel.
>> Disclaimer: I am not an expert in API security.  It would be good to hear
>> from someone who has been analyzing PKCS#11.
>> David
>>> You can imagine a couple of ways this could be useful...
>>> -- Browsers throw big red flags when an app tries to use unsafe
>>> stuff (especially if the JS arrived over HTTP)
>>> -- Web sites could publish over HTTPS a manifest of whether they
>>> intend to be safe/unsafe
>>> -- Code/security reviews could focus on unsafe sections of the API
>>> At the very least, if we enforce the discipline of marking methods as
>>> safe or not, then it allows us to move ahead with the API, optionally
>>> kicking out the unsafe methods later.
>>> --Richard
>>> On May 22, 2012, at 11:54 AM, Eric Rescorla wrote:
>>>> On Tue, May 22, 2012 at 2:23 AM, David McGrew <mcgrew@cisco.com> wrote:
>>>>> On May 10, 2012, at 10:36 AM, Anil Saldhana wrote:
>>>>>> Giving direct access to private keys to the JS api is trouble.
>>>>>> I support David's thoughts on just allowing references to IDs of
>>>>>> Private Keys.
>>>>> +1
>>>>> It will also be important that the API itself not allow
>>>>> manipulations of secret and private keys that would let an
>>>>> attacker cause one of those keys to be revealed by executing a
>>>>> (possibly convoluted) sequence of operations on it, as has been
>>>>> shown to be the case for PKCS#11 (see, for instance,
>>>>> <http://www.lsv.ens-cachan.fr/~steel/pkcs11/>).
>>>> David,
>>>> I think this is actually an argument *against* key isolation.
>>>> As soon as protecting the keys becomes a system invariant, then the
>>>> introduction of any new API call requires extensive cryptographic
>>>> review. As I've been putting it lately, "every time you want to add a
>>>> new API point, you need to call Dan Boneh".
>>>> This isn't to say that there is no use for key isolation, but that
>>>> making it a security guarantee of the system is quite expensive in
>>>> terms of design cost.
>>>> -Ekr
Received on Friday, 1 June 2012 17:40:12 UTC