Re: Use case classification, and associated security models

I didn't mean to suggest that smart cards themselves can protect users from
malicious servers, but rather that the browser can fairly easily protect
users from malicious servers in use cases where smart cards (or any other
opaque keys) are being used for signing or authentication. And that it
seems far more complicated for the browser to provide this protection in
the encryption use case.

More specifically, I agree that Philip's use case, like the encryption use
cases, can only involve honest-but-curious servers. But in use cases where
the user is the one doing the signing, such as signing a tax return before
submitting it, a malicious server might want to trick the user into signing
some different statement. The browser can protect the user from this by
prompting the user to approve whatever they sign, so an honest user can
protect themselves against such a server by using a compliant browser.

On Wed, Jun 13, 2012 at 10:00 AM, Vijay Bharadwaj <
Vijay.Bharadwaj@microsoft.com> wrote:

>  Yes, this (i.e. isolated decryption output) seems a little bit out of
> scope.
>
> As to Jim’s use cases (and Philip’s case in the other branch of this
> thread), I believe this comes up a lot in settings where the server side of
> the HTTP connection is really an honest-but-curious intermediary. For
> example, a doctor (who presumably knows everything about your medical
> report) encrypts the report and places it on an honest-but-curious CDN for
> you to retrieve.
>
> I think the cases you point out with authentication and signing are
> similar; you can protect against an honest-but-corruptible server that is
> really an intermediary to the transaction, but you can’t really do much
> about a malicious server that terminates the transaction (especially if the
> server is also delivering the JavaScript). At best, you can audit the
> server later and repudiate certain transactions, but that is a separate
> operation. In general, I think of smart card authentication primarily as a
> means for the server to protect itself against bad users or user agents by
> providing non-repudiation. I haven’t seen any use cases where it was used
> to prevent a malicious server from harming a legitimate user (as opposed to
> detecting such harm through later auditing) – if you have any such use
> cases in mind, it would be interesting to discuss them.
>
> *From:* Emily Stark [mailto:estark@MIT.EDU]
> *Sent:* Tuesday, June 12, 2012 2:11 PM
> *To:* Davenport, James L.
> *Cc:* public-webcrypto@w3.org
> *Subject:* Re: Use case classification, and associated security models
>
> These use cases should be qualified by the assumption that all the servers
> storing the encrypted data are honest-but-curious, right? When a smart card
> is used for authentication or signing, it seems possible to protect the
> user from a malicious server (as is being discussed in other threads: by
> having the browser obtain the user's permission to authenticate with a
> particular key, or by having the browser require the user to approve the
> statement being signed), but this protection doesn't seem easily obtainable
> when the key on the smart card is being used for decryption. Almost
> certainly out of scope, but it might be nice to have a decryption operation
> that outputs the plaintext in some UI that's isolated from the application
> (instead of returning the plaintext to the application), and some kind of
> ciphertext that can only be decrypted by this isolated-output decryption
> operation.
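>
> A hypothetical shape for such an operation might be the following (a
> sketch only; decryptToIsolatedUI and the algorithm choice are
> illustrative, not part of any proposal):
>
>   // Instead of returning plaintext to the page, the browser would
>   // render it in trusted UI that the page's script cannot read.
>   await window.crypto.decryptToIsolatedUI(
>     { name: "RSA-OAEP" },  // illustrative algorithm
>     smartCardKey,          // key located on the smart card
>     ciphertextBuffer);     // ciphertext bound to isolated output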
>
> Emily
>
> On Tue, Jun 12, 2012 at 12:54 PM, Davenport, James L. <jdavenpo@mitre.org>
> wrote:
>
> Excellent summary, Vijay.
>
> I’d like to add a few use cases to your “keys obtained out-of-band” since
> I believe it’s important to show the API allowing out-of-band keys (and
> only certain out-of-band keys) to be used for decrypting data that has
> been encrypted.
>
> The following use cases involve persistently storing data in an encrypted
> form in the cloud, clients retrieving the encrypted data, and using the
> Crypto API to decrypt the data using a private key on a Smart Card. In the
> following examples, where I say “decrypt using Smart Card,” you could just
> as easily replace this with “decrypt using out-of-band key obtained from
> the Smart Card,” but I’m pretty sure that (by design) you cannot extract a
> private key from a Smart Card. Of course this makes it more of a challenge
> for the API to handle Vijay’s three classifications of keys in similar
> manners. (I hope someone can prove me wrong!)
>
> My doctor stores my latest medical test report on the cloud. For privacy
> and security reasons, the report is stored in an encrypted form. The
> doctor's secretary emails me, "At your leisure you can get the report at
> this URL." I open my browser and enter the URL, which results in fetching
> the encrypted report along with some HTML and JavaScript. The JavaScript
> requests the Crypto API to decrypt the report using my Smart Card. The API
> returns the decrypted report and the JavaScript then inserts it into the
> HTML, which is then displayed on the browser screen.
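>
> In modern terms, the JavaScript in this (and each of the following) use
> case might look roughly like the sketch below. The openKey() call and its
> selection criterion are hypothetical (see Vijay’s Scenario 3); the decrypt
> call is shown in the shape such an API might take.
>
>   // Fetch the encrypted report, locate the Smart Card key, decrypt,
>   // and insert the plaintext into the page.
>   const response = await fetch("https://cloud.example/report.enc");
>   const ciphertext = await response.arrayBuffer();
>   const key = await crypto.openKey({ issuer: "National Registry CA" });
>   const plaintext = await crypto.subtle.decrypt(
>     { name: "RSA-OAEP" }, key, ciphertext);  // illustrative algorithm
>   document.getElementById("report").textContent =
>     new TextDecoder().decode(plaintext);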
>
> The Social Security administration uploads each citizen's benefits summary
> report to the cloud. For privacy and security reasons, the reports are
> stored in an encrypted form. The Social Security administration announces
> on their web site the availability of the reports. Each citizen can then
> open a browser, go to the web site for the Social Security administration
> and log on, which takes them to their encrypted report. The encrypted
> report is fetched by the browser, along with some HTML and JavaScript. The
> JavaScript requests the Crypto API to decrypt the report using the
> citizen's Smart Card. The API returns the decrypted report and the
> JavaScript then inserts it into the HTML, which is then displayed on the
> browser screen.
>
> A financial brokerage firm generates and stores quarterly reports for each
> of its members on the cloud. For privacy and security reasons, the reports
> are stored in an encrypted form. Each member can then open a browser, go to
> the web site for the financial brokerage firm and log on, which takes them
> to their encrypted quarterly report. The encrypted report is fetched by the
> browser, along with some HTML and JavaScript. The JavaScript requests the
> Crypto API to decrypt the report using the member's Smart Card. The API
> returns the decrypted report and the JavaScript then inserts it into the
> HTML, which is then displayed on the browser screen.
>
> A National Identity Smart Card contains a person's private key. A National
> Registry contains each person's public key. The doctor, Social Security
> administration, and financial brokerage firm obtain the appropriate public
> keys from the National Registry.
>
> *From:* Vijay Bharadwaj [mailto:Vijay.Bharadwaj@microsoft.com]
> *Sent:* Tuesday, June 12, 2012 4:07 AM
> *To:* public-webcrypto@w3.org
> *Subject:* Use case classification, and associated security models
>
> (apologies in advance for the long email)
>
> As I mentioned during the conference call earlier today, I’ve been
> thinking about the various use cases proposed so far from the viewpoint of
> key management. It seems to me that these break down into three basic cases
> that a Web crypto API must support, each with subtle differences in the
> trust model.
>
> Scenario 1: Ephemeral or local-only keys
>
> Some scenarios involve only keys that are generated in the browser by
> JavaScript, and only ever used inside that browser (either within the same
> session or persisted across sessions). The obvious example is encryption of
> data for local storage or temporary encryption of in-memory data. The
> identifying feature of this type of scenario is that the key is only ever
> used by the app that generates it.
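>
> For concreteness, a minimal sketch of this case (assuming an AES-GCM-style
> generateKey/encrypt pair; the names follow the shape such an API might
> take, not a committed design):
>
>   // Generate a key that never leaves the browser, and encrypt some
>   // data before persisting it to local storage.
>   const key = await crypto.subtle.generateKey(
>     { name: "AES-GCM", length: 256 }, false, ["encrypt", "decrypt"]);
>   const iv = crypto.getRandomValues(new Uint8Array(12));
>   const ciphertext = await crypto.subtle.encrypt(
>     { name: "AES-GCM", iv }, key,
>     new TextEncoder().encode("application state"));
>   // Only ciphertext (plus iv) is persisted; the key object itself
>   // stays inside the browser, matching the local-only trust model.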
>
> The security model here seems to be that the web app believes its
> environment to be honest-but-curious or honest-but-coercible and so is
> trying to mitigate that by adding a layer of security through crypto. For
> example, if an app trusted the host OS to safeguard a particular piece of
> stored data, there would be no need for the app to encrypt it (it could
> just trust the OS to do so if necessary). At the same time, the app trusts
> the host OS to have some modicum of honesty (otherwise encryption is
> useless; the host could just steal the data anyway).
>
> Scenario 2: Ephemeral keys obtained through key agreement
>
> Another scenario is where keys are obtained through key exchange or key
> transport in the app. For instance, consider the use cases where Alice and
> Bob are trying to converse through an intermediary Carol (who runs the web
> service brokering the conversation). They would set up some kind of key
> agreement and then use the agreed key to encrypt bulk traffic. The key
> exchange may be bootstrapped by some other long-lived key (see scenario 3)
> or brokered by the service.
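>
> A sketch of what that key agreement might look like, assuming an
> ECDH-style deriveKey operation (the shape is illustrative, and
> bobPublicKey is assumed to arrive via Carol’s service):
>
>   // Alice generates an ephemeral key pair, then derives a shared
>   // AES key from her private key and Bob's public key. Carol can
>   // broker the exchange but never holds a private key.
>   const alice = await crypto.subtle.generateKey(
>     { name: "ECDH", namedCurve: "P-256" }, false, ["deriveKey"]);
>   const sharedKey = await crypto.subtle.deriveKey(
>     { name: "ECDH", public: bobPublicKey },
>     alice.privateKey,
>     { name: "AES-GCM", length: 256 },
>     false, ["encrypt", "decrypt"]);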
>
> Here the security model seems to be that the web app trusts the host
> environment, but distrusts some remote party (i.e. Carol). It is therefore
> using the web crypto API to fill a need that transport-level security does
> not meet.
>
> In some ways this is similar to Scenario 1 (e.g. local encryption of
> persisted data is essentially a protocol where current-you is sending data
> to future-you), but I’m calling it out as separate due to the difference in
> security models.
>
> Scenario 3: Long-lived keys obtained out-of-band
>
> This covers all the smart card scenarios and other things like credit
> cards and national IDs. In fact, pretty much anything involving signature
> or non-repudiation would seem to need this. The human user has a long-lived
> credential (in the form of a key) that was issued by the service (or
> someone trusted by the service). The service wants the user to use this key
> to authenticate and/or encrypt data to provide some assurance against
> untrusted entities between the user and the service (both the user’s client
> environment and any intervening network entities). In this particular case,
> the service needs a way to tell the user agent which keys are acceptable,
> and therefore some sort of key selection method is needed. For asymmetric
> keys, basing the selection on certificates seems reasonable. For symmetric
> keys, this is harder - some sort of key ID scheme may be reasonable. In
> either case, the underlying OS is responsible for locating the key
> container and the crypto module or provider it’s in. This module or
> provider need not be exposed to the web app at all, though the service may
> well make some assumptions about its behavior.
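>
> A service might express those selection criteria along these lines (a
> hypothetical openKey() with illustrative filter fields, not a proposed
> interface):
>
>   // Asymmetric case: select by acceptable certificate issuer.
>   // A symmetric variant might instead pass a keyId.
>   const key = await crypto.openKey({
>     issuers: ["CN=Example National CA"],  // illustrative criterion
>     usages: ["sign"]
>   });
>   // The platform locates the matching container (e.g. a smart card)
>   // without exposing the crypto module itself to the page.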
>
> In this case the security model is different from the other cases. Here
> the key container / secure element is the thing that is trusted. The
> assumption is that the key provisioning process makes it so that only
> secure elements can contain keys matching the service’s selection criteria.
> On the other hand, the web app and its environment are not necessarily
> trusted – in extreme cases, the secure element may have its own display and
> user input mechanisms to verify user consent independently of them.
>
> Use cases involving signature validation are also arguably part of this
> family, since the trust anchor (e.g. root certificate) is likely
> provisioned out-of-band as well.
>
> In all the above cases, once a key is obtained, all the actual crypto
> operations are pretty much the same. So if we define all operations in the
> API which require a key such that they take a key object as a parameter,
> then the only difference between the above scenarios (from an API
> perspective) is the operations used to instantiate that key object. The
> above 3 scenarios would then correspond to 3 different instantiation
> methods for key objects (a rough sketch follows the list below):
>
> 1.      GenerateKey – create a new key for use with a specific algorithm.
> Choice of crypto provider left up to the platform.
>
> 2.      ImportKey – take a key blob obtained from key agreement and
> create a key object from it. Choice of crypto provider left up to the
> platform.
>
> 3.      OpenKey – Locate a key on the host system that matches a set of
> criteria. Choice of crypto provider to be made by platform depending on the
> location of the key.
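>
> In code, the uniform shape might look like this (hypothetical signatures,
> illustrative only):
>
>   const k1 = await crypto.generateKey(algorithm);             // scenario 1
>   const k2 = await crypto.importKey(format, blob, algorithm); // scenario 2
>   const k3 = await crypto.openKey(selectionCriteria);         // scenario 3
>   // Every subsequent operation takes a key object, regardless of
>   // how the key was instantiated:
>   const sig = await crypto.subtle.sign(algorithm, k3, data);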
>
> There is also a fourth primitive which is often used with scenario 3 –
> credential enrollment. This would be the operation where the user employs
> the trusted key to obtain a credential (e.g. enrolling for a smart card
> certificate by signing a request using one’s existing smart card key).
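>
> For example, enrollment might reduce to an ordinary sign operation over a
> certificate request (buildCertRequest() is a hypothetical helper, and the
> algorithm is illustrative):
>
>   // Sign a request for a new credential with the existing key;
>   // the signed request then goes to the issuing authority.
>   const request = buildCertRequest(newPublicKey);
>   const signature = await crypto.subtle.sign(
>     { name: "RSASSA-PKCS1-v1_5" }, existingSmartCardKey, request);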
>
> Does that seem reasonable? Any other families of use cases that I’m
> overlooking?
>

Received on Wednesday, 13 June 2012 14:33:00 UTC