- From: Seetharama Rao Durbha <S.Durbha@cablelabs.com>
- Date: Fri, 1 Jun 2012 09:45:16 -0600
- To: "Da Cruz Pinto, Juan M" <juan.m.da.cruz.pinto@intel.com>, David McGrew <mcgrew@cisco.com>, "Richard L. Barnes" <rbarnes@bbn.com>
- CC: Eric Rescorla <ekr@rtfm.com>, Anil Saldhana <Anil.Saldhana@redhat.com>, "public-webcrypto@w3.org" <public-webcrypto@w3.org>
I am not completely convinced that PKCS#11 is applicable to a JavaScript crypto API provided by a browser. With PKCS#11, keys are built in through a trusted mechanism (at manufacturing time, or through a trusted process). In a browser, keys are not built in, and there is no way to get keys into a browser in a way that the server can trust.

Put another way, the ultimate beneficiary of any assurance we can provide on the browser side is actually the server: the server wants assurance that a client key is secure and not compromised before allowing access to the services it exposes. But given the nature of HTTP and the web, I am not sure the server can be given that assurance.

Take two cases: in one, I have developed a very secure JS implementation using the crypto API; in the other, I am a rogue client that mimics whatever the JS does (including any user authentication). In the first case, my keys are secure in the browser; in the second, my keys are known to my rogue client. The question is how the server can be assured that one client is based on a secure browser implementation and the other is not. Also, as more and more services are exposed as web services, there is a need to support multiple client types (browsers as well as custom clients).

Consider a related case: installing certificates in the browser. There is no secret sauce to the protocol, so any custom client (not a browser) can mimic the browser. The assurance to the server does not actually come from the technology, per se, but from the user: the user is expected to treat the certificate as securely as possible; otherwise, it is their account that will be compromised.

Seetharama

On 5/30/12 1:06 PM, "Da Cruz Pinto, Juan M" <juan.m.da.cruz.pinto@intel.com> wrote:

>Keep in mind that PKCS#11 defines an API for accessing crypto operations,
>one which does not require the caller to have direct access to key
>material.
>For instance, most HSM (Hardware Security Module) vendors
>provide a PKCS#11 library for developers to integrate with.
>
>This means that if you are using a PKCS#11 module, then you don't really
>need to have safe/unsafe sections of the API when using, e.g., RSA.
>Moreover, if you are using a smartcard through a PKCS#11 module, then you
>most probably will not be able to access the key material at all.
>
>Developers try to avoid manipulating private key material in code for
>several reasons (it's difficult, security concerns, etc.). Developers
>might need to access public key material (e.g. in cases where they
>need to package signatures and certificates in custom protocols), but
>not typically private key material.
>
>Marcelo.
>
>-----Original Message-----
>From: David McGrew [mailto:mcgrew@cisco.com]
>Sent: Tuesday, May 29, 2012 17:55
>To: Richard L. Barnes
>Cc: Eric Rescorla; Anil Saldhana; public-webcrypto@w3.org
>Subject: Re: ECC vs RSA, and Similar Conflicts
>
>Hi Richard,
>
>On May 25, 2012, at 3:39 PM, Richard L. Barnes wrote:
>
>> How about this as a compromise: Split the API into two halves, safe
>> and unsafe. The safe methods preserve key isolation, have been
>> reviewed by Dan, etc. The unsafe methods might leak key material.
>>
>
>I think this dichotomy makes sense. It seems technically feasible, and
>as a direction it allows the development of both safe and unsafe APIs
>in parallel.
>
>Disclaimer: I am not an expert in API security. It would be good to
>hear from someone who has been analyzing PKCS#11.
>
>David
>
>> You can imagine a couple of ways this could be useful...
>> -- Browsers throw big red flags when an app tries to use unsafe
>> stuff (especially if the JS arrived over HTTP)
>> -- Web sites could publish over HTTPS a manifest of whether they
>> intend to be safe/unsafe
>> -- Code/security reviews could focus on unsafe sections of the API
>>
>> At the very least, if we enforce the discipline of marking methods as
>> safe or not, then it allows us to move ahead with the API, optionally
>> kicking out the unsafe methods later.
>>
>> --Richard
>>
>> On May 22, 2012, at 11:54 AM, Eric Rescorla wrote:
>>
>>> On Tue, May 22, 2012 at 2:23 AM, David McGrew <mcgrew@cisco.com> wrote:
>>>> On May 10, 2012, at 10:36 AM, Anil Saldhana wrote:
>>>>
>>>>> Giving direct access to private keys to the JS API is trouble.
>>>>>
>>>>> I support David's thoughts on just allowing references to IDs of
>>>>> private keys.
>>>>
>>>> +1
>>>>
>>>> It will also be important that the API itself not allow
>>>> manipulations of the secret and private keys that allow an attacker
>>>> to cause one of those keys to be revealed by executing a (possibly
>>>> convoluted) sequence of operations on it, as has been shown to be
>>>> the case for PKCS#11 (see for instance
>>>> <http://www.lsv.ens-cachan.fr/~steel/pkcs11/>)
>>>
>>> David,
>>>
>>> I think this is actually an argument *against* key isolation.
>>>
>>> As soon as protecting the keys becomes a system invariant, the
>>> introduction of any new API call requires extensive cryptographic
>>> review. As I've been putting it lately, "every time you want to add
>>> a new API point, you need to call Dan Boneh."
>>>
>>> This isn't to say that there is no use for key isolation, but that
>>> making it a security guarantee of the system is quite expensive in
>>> terms of design cost.
>>>
>>> -Ekr
Received on Friday, 1 June 2012 15:52:17 UTC