Use case classification, and associated security models

(apologies in advance for the long email)

As I mentioned during the conference call earlier today, I've been thinking about the various use cases proposed so far from the viewpoint of key management. It seems to me that these break down into three basic cases that a Web crypto API must support, each with subtle differences in the trust model.

Scenario 1: Ephemeral or local-only keys

Some scenarios involve only keys that are generated in the browser by JavaScript and only ever used inside that browser (either within the same session or persisted across sessions). The obvious example is encryption of data for local storage, or temporary encryption of in-memory data. The identifying feature of this type of scenario is that the key is only ever used by the app that generates it.

The security model here seems to be that the web app believes its environment to be honest-but-curious or honest-but-coercible and so is trying to mitigate that by adding a layer of security through crypto. For example, if an app trusted the host OS to safeguard a particular piece of stored data, there would be no need for the app to encrypt it (it could just trust the OS to do so if necessary). At the same time, the app trusts the host OS to have some modicum of honesty (otherwise encryption is useless; the host could just steal the data anyway).
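To make this concrete, a Scenario 1 flow might look roughly like the sketch below. Every name and parameter shape here is a placeholder rather than a concrete proposal (and a real API would presumably be asynchronous); "toBase64" and "recordBytes" are made up for illustration.

    // Illustrative only: "cryptoApi", its methods and the parameter shapes
    // are placeholders; error handling and asynchrony are ignored.
    var key = cryptoApi.generateKey({ algorithm: "AES-GCM", length: 256 });

    // The key never leaves this browser; the app that generated it is the
    // only thing that ever uses it, e.g. to protect a record before
    // writing it to local storage.
    var iv = window.crypto.getRandomValues(new Uint8Array(12));
    var ciphertext = cryptoApi.encrypt(key, { iv: iv }, recordBytes);
    localStorage.setItem("record", toBase64(iv) + "." + toBase64(ciphertext)); // toBase64: hypothetical helper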

Scenario 2: Ephemeral keys obtained through key agreement

Another scenario is where keys are obtained through key exchange or key transport in the app. For instance, consider the use cases where Alice and Bob are trying to converse through an intermediary Carol (who runs the web service brokering the conversation). They would set up some kind of key agreement and then use the agreed key to encrypt bulk traffic. The key exchange may be bootstrapped by some other long-lived key (see scenario 3) or brokered by the service.

Here the security model seems to be that the web app trusts the host environment, but distrusts some remote party (i.e. Carol). It is therefore using the web crypto API to fill a need that transport-level security does not.

In some ways this is similar to Scenario 1 (e.g. local encryption of persisted data is essentially a protocol where current-you is sending data to future-you) but I'm calling it out as separate due to the difference in security models.
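A similarly rough sketch of Scenario 2, from Alice's side (same caveats: all names and shapes are placeholders; "sendViaCarol", "bobPublicValue", "freshIv" and "messageBytes" are just stand-ins for whatever the app's plumbing provides):

    // Illustrative only - Alice creates an ephemeral agreement key pair
    // and sends the public half to Bob through Carol's service.
    var myPair = cryptoApi.generateKey({ algorithm: "ECDH", namedCurve: "P-256" });
    sendViaCarol(myPair.publicKey); // hypothetical transport helper

    // When Bob's public value arrives, derive the shared secret and turn
    // it into a key object for bulk encryption. Carol relays the
    // ciphertext but never sees the agreed key.
    var sharedSecret = cryptoApi.deriveBits(myPair.privateKey, bobPublicValue);
    var sessionKey = cryptoApi.importKey({ algorithm: "AES-GCM" }, sharedSecret);
    var ciphertext = cryptoApi.encrypt(sessionKey, { iv: freshIv }, messageBytes);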

Scenario 3: Long-lived keys obtained out-of-band

This covers all the smart card scenarios and other things like credit cards and national IDs. In fact, pretty much anything involving signature or non-repudiation would seem to need this. The human user has a long-lived credential (in the form of a key) that was issued by the service (or someone trusted by the service). The service wants the user to use this key to authenticate and/or encrypt data to provide some assurance against untrusted entities between the user and the service (both the user's client environment and any intervening network entities).

In this particular case, the service needs a way to tell the user agent which keys are acceptable, and therefore some sort of key selection method is needed. For asymmetric keys, basing the selection on certificates seems reasonable. For symmetric keys, this is harder - some sort of key ID scheme may be reasonable. In either case, the underlying OS is responsible for locating the key container and the crypto module or provider it's in. This module or provider need not be exposed to the web app at all, though the service may well make some assumptions about its behavior.

In this case the security model is different from the other cases. Here the key container / secure element is the thing that is trusted. The assumption is that the key provisioning process makes it so that only secure elements can contain keys matching the service's selection criteria. On the other hand, the web app and its environment are not necessarily trusted - in extreme cases, the secure element may have its own display and user input mechanisms to verify user consent independent of them.
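A rough sketch of how Scenario 3 might look to the web app. The selection-criteria shape below is made up purely for illustration, as are "serviceIssuer" and "serverChallenge" (assumed to come from the service):

    // Illustrative only - the criteria shape is a placeholder for whatever
    // key selection mechanism the API ends up defining.
    var cardKey = cryptoApi.openKey({
        keyUsage: "sign",
        acceptableIssuers: [serviceIssuer] // e.g. certificate-based selection
    });

    // The app only gets back an opaque key object; the private key
    // material and the module/provider holding it stay behind the platform.
    var signature = cryptoApi.sign(cardKey,
        { algorithm: "RSASSA-PKCS1-v1_5", hash: "SHA-256" },
        serverChallenge);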

Use cases involving signature validation are also arguably part of this family, since the trust anchor (e.g. root certificate) is likely provisioned out-of-band as well.


In all the above cases, once a key is obtained, all the actual crypto operations are pretty much the same. So if we define every operation in the API that requires a key to take a key object as a parameter, then the only difference between the above scenarios (from an API perspective) is the operation used to instantiate that key object. The three scenarios would then correspond to three different instantiation methods for key objects (there's a short sketch after the list):


1. GenerateKey - create a new key for use with a specific algorithm. Choice of crypto provider left up to the platform.

2. ImportKey - take a key blob obtained from key agreement and create a key object from it. Choice of crypto provider left up to the platform.

3. OpenKey - locate a key on the host system that matches a set of criteria. Choice of crypto provider to be made by the platform depending on the location of the key.
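Put differently, the instantiation step would be the only scenario-specific part; everything downstream just takes the key object. Roughly (placeholder names again; "agreedSecret", "issuers", "params" and "data" are assumed to be in scope):

    // Illustrative only - three ways of ending up with the same kind of
    // opaque key object, one per scenario.
    var k1 = cryptoApi.generateKey({ algorithm: "AES-GCM", length: 256 });       // Scenario 1
    var k2 = cryptoApi.importKey({ algorithm: "AES-GCM" }, agreedSecret);         // Scenario 2
    var k3 = cryptoApi.openKey({ keyUsage: "sign", acceptableIssuers: issuers }); // Scenario 3

    // Every operation that needs a key just takes the key object,
    // however it was instantiated.
    cryptoApi.encrypt(k1, params, data);
    cryptoApi.encrypt(k2, params, data);
    cryptoApi.sign(k3, params, data);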

There is also a fourth primitive which is often used with Scenario 3 - credential enrollment. This would be the operation where the user employs the trusted key to obtain a credential (e.g. enrolling for a smart card certificate by signing a request using one's existing smart card key).
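One possible shape of that flow, sketched the same way as above ("buildCertRequest", "sendToCA", "cardIssuer" and "subjectInfo" are hypothetical stand-ins for whatever the enrollment plumbing turns out to be):

    // Illustrative only - the request for a new credential is signed with
    // the user's existing smart card key.
    var cardKey = cryptoApi.openKey({ acceptableIssuers: [cardIssuer] });
    var newPair = cryptoApi.generateKey({ algorithm: "RSA", modulusLength: 2048 });

    var request = buildCertRequest(newPair.publicKey, subjectInfo); // hypothetical helper
    var signedRequest = cryptoApi.sign(cardKey,
        { algorithm: "RSASSA-PKCS1-v1_5", hash: "SHA-256" },
        request);
    sendToCA(signedRequest); // hypothetical helper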

Does that seem reasonable? Any other families of use cases that I'm overlooking?
