Re: Origin-bound keys

On Aug 9, 2012, at 11:04 AM, Ryan Sleevi wrote:

On Mon, Aug 6, 2012 at 12:41 PM, Mark Watson <<>> wrote:

On Aug 6, 2012, at 11:34 AM, Ryan Sleevi wrote:

While hand-waving about UI decisions (as I tend to get UI horribly
wrong as a general rule), it seems to me that generating
origin-specific-and-persistent keys is equally troubling, because it
encourages application providers to 'lock-in' keys to their specific

I don't really understand this concern. A key created by one origin can only be used by another with the cooperation of the first. In the case of symmetric keys, they need to share the actual keys. In the case of key pairs with certificates establishing some form of identity, they need to share the public key of the key pair used to sign the certificate. You can't share keys across origins without the application provider's consent anyway, and if they give their consent then they'll use origin-authorized keys.

I think this point may be where some of our misunderstanding comes
from. I believe that "A key created by one origin can only be used by
another with the cooperation of the first **or the permission of the
user**". Note the ** portion.

I've a feeling we're talking at crossed purposes. Here's a specific example: <> creates a symmetric key in your browser and marks it as not supporting export. You - the user - would like to use that key with <>.

But that key is completely useless to <> unless someone provides them with the raw key. You can't provide them with it (export is not supported), and <> is not going to do that either.

Another: An application from <> creates a public/private key pair for your browser and arranges for <> to make a certificate for you for that key pair. The certificate indicates that the signing authority is <>. You would like to use that key to log into <>. Again, that can only work if <> has information from <> (the public key of <>) which we do not have to give them.

Since actually using a key created by one origin requires the cooperation of the organization owning that origin, any organization supporting such re-use would create those keys as origin-authorized, not origin-specific.

There is no (technical) way to *force* organizations to design their security such that keys/identities can be reused across origins.

My view is that while access to keys is limited on an origin-based
security model, I do not think that the origin is the sole arbiter of
key security or access. This is consistent with the 'native code'
model, in which Application 1 stores a key in the user's OS layer key
store, and *any application* that runs on the user's machine can later
access that key. In our model of a JS API, "any origin" can
*potentially* access the key, dependent on the implementation and how
it exposes the key.

Access to such a key is useless unless the creating origin is cooperating in some way. If not, there's no point in requiring that the key be accessible to others with user authorization. On the other hand, there's an advantage in marking the key as unusable by others: fewer security dialogs, less user confusion, etc.

I feel like I've given sufficient examples about why I take this view,
particularly the "cloud storage" example, but if it would be helpful
for me to spell out why I think this is necessary, I'll be glad to do
so. I just want to make sure we're first understanding each other's
respective positions on the key security model.

I'm not sure we are ;-) More examples would be good.

My thoughts on origin-authorized user interaction is that /any/
application can create an origin-authorized key without special UI.

I'm worried that not every implementation will agree with that (and may require user interaction).

And this is equally true for other APIs in the Web Platform space -
such as getUserMedia(), navigator.geolocation.getCurrentPosition(),
IndexedDB quota limits, etc. As Harry mentioned during the Face to
Face, the W3C tries to avoid requiring or forbidding certain UI
interactions, so I don't know if we can realistically accomplish the
desired behaviour in the spec. A conformant implementation could
prompt for *any* access for the Web Crypto API - for example, if it
was subject to timing attacks that could lead to key disclosure.

Sure. But if we don't distinguish between origin-authorized and origin-specific keys then the UI for them has to be the same. The browser implementor isn't even given the choice to implement a simpler UI for the origin-specific case.

It's only when another origin attempts to discover keys with a 'broad'
query (eg: rather than specific IDs, it says 'all RSA keys'), and no
pre-existing per-origin grants exist, that the user may be prompted to
select from an existing set of origin-authorized keys.

In our use case, we will create several "session" keys through key agreement, authenticated with the pre-provisioned key. The session keys then have a lifetime of some number of hours or days. These keys would be meaningless to any other application and also to users. I would not want users to get dialogs showing big lists of incomprehensible UUID-identified keys that were intended for internal use by individual applications.

This is perhaps another way of making the distinction. Not all keys are meaningful to other origins. Especially symmetric keys.

While I agree with the fundamentals, and while I'm certain Netflix's
use case is certainly intending to favor the user, I'm concerned about
sites that are less pro-user that use keys as a lock-in mechanism.

Applications can always use keys to lock in users, whether or not those keys are visible to other origins.

I forget the details of your cloud-storage example, but re-constructing: is your concern that a cloud storage application would store the key for the user's data as origin-specific and thus prevent the user from accessing their own data except through that service ? Requiring the cloud storage application to reveal that key to other origins does little to simplify extracting the data without the help of the service. You would need to reverse-engineer the application to understand how the data was requested from the service and how it was then decrypted. A user capable of this is easily capable of recompiling (or just configuring) an open source browser to reveal the origin-specific keys.

The point of the distinction is that if we have it, browsers can safely choose to give ordinary users a less invasive and confusing UI for the origin-specific keys.

I believe that, at the end of the day, the user must remain in control
of the security and keying material on their system.

I think we'll continue to have the discussion about the 'right way' to
use the API, but I would expect that the only time dialogs or user
interaction even remotely come into consideration is when an
application gives a 'broad' KeyQueryList - that is, one which does not
try to specifically identify 'known' keys, and which it does not
indicate it's looking for keys it has already been granted access to.

I'm not a privacy expert, but one concern is whether user interaction is required to create keys which may later be visible to other origins.

I can see an argument that this is required, so that later, when the user is asked to grant access to those keys, they may have some memory of when or why they were created.

Keys that are visible to other origins raise tracking concerns that don't exist for origin-specific keys. Actually I'd originally assumed that all keys would be origin-specific for this reason (no more dangerous than cookies).

Then, it's up to the implementation to decide how to react
accordingly; including a perfectly appropriate response which is to
say "no such keys exist" and never show any UI. That's up to the
implementation though.

This is, in
effect, the SSL/TLS client certificate security model, but applied to
keys rather than certificates. This seems to mesh particularly well
with the Secondary API features, which is why I'm so fond of it.

Though it's probably an unrealistic security posture, my gut is that
the decision about whether keys are origin-specific or
origin-authorized is a decision better left up to the user (or
further, not even specified), since it affects both their privacy,
potentially browser flexibility (since it now requires a formal
specification of the stable key storage), and their flexibility to
choose between web application providers. The question is: Are there
situations where the application may wish origin-specific, but the
user wishes origin-authorized? I think so.

Can you give an example?

I think it reduces flexibility if the information about the intended use of the key (only ever with one origin, or possibly with other origins) is not available to the browser. We miss opportunities for improved user experience because all keys have to be treated like they will be used elsewhere when most of the time that will not be the case.

Consider Applications 1 and 2, both of which wish to use
'origin-specific' RSA keypairs as part of the authentication workflow.
During the registration/enrollment phase, both applications have
workflows similar to the following:

1) Generate an RSA key pair of size X
2) Export the public key and send to the server
3) Associate the (username, public key) on the server side
4) For future authentication attempts, the user must perform some
signing operation with the private key

This is a rather simple, easy to understand auth flow.

One scenario would be that each origin generates an origin-specific
key pair (eg: they have no relationship to each other). That's fine,
and if that's all an implementation supported, the scenario fully
works as expected for both applications.

However, imagine the user already had an RSA public/private key pair
through some other means. It may even be stored in a
higher-than-normal security layer (stored in the OS with a
prompt-on-use semantic, stored on a secure element, stored on some
other device, etc). Rather than generating two origin-specific
keypairs, in addition to the one they already have, the user wishes to
*reuse* the existing key pair for both sites.

Yes, this allows Application 1 and 2 to collude and determine that
user-Application-1 is probably user-Application-2, since they both
share the public key, but that's done at the user's choice. (As it
stands, the applications can probably already collude based on either
username+password+ip matching, or by simply looking at the user's
email address, but that's a different story). In this case, even
though both origins requested origin-specific keys, the user provides
an origin-authorized key.

That may seem like a contrived example ("The user already had the key,
therefore it's different") - but I think it's also possible that this
high-security key was generated *by the user* when some OTHER
application (Application 3) was requesting a key - perhaps even an
origin-specific one! At the time of generation, the user configured
all of these high-security options, which is how that key came into
being.

While I can certainly understand dialog fatigue, and again, I'm making
no commitment one way or the other with regards to UI, I'm trying to
leave this API sufficiently flexible that such implementation-specific
behaviours or features are not forbidden or impossible.

What you describe is a pretty sophisticated user who (presumably) understands or has been warned of the dangers of sharing keys this way.

I wonder if we can come up with some other definition of origin-specific that meets both our requirements? There is still a significant difference in my mind between the case where the application does not intend the key to be re-used by other origins (I would say this is the more common case), and the case where it is really the intention that the key be re-used across origins (because it's tied to some federated identity scheme that allows the user to prove some form of identity to multiple applications).

Where I'm having trouble is understanding whether there are particular
_security benefits_ from making the distinction, or if the concern is
primarily related to UI? If the latter, then couldn't origin-specific
pre-provisioned keys be handled by implementations that support such
keys? eg: in the Netflix case, couldn't the Netflix User Agent simply
know that certain keys are origin-specific, and only expose them for
particular origins, and to do so without prompting? This avoids having
to spec the distinction, and leaves implementations the flexibility to
implement according to their underlying key storage mechanisms.

For the pre-provisioned keys, yes: we can just say that these must only be exposed to the <> origin.

But we'd like the generated session keys to have the same property, and for users to suffer no more from their existence (in terms of privacy, control and confusing UIs) than they do today from site-specific cookies.


Users and user agents can already configure prompting for
site-specific cookies. Thus I think the point of 'like cookies'
highlights my concern - that implementations, not applications, should
be in charge of security relevant decisions.

Yes, for 99.9% of users, they never encounter these prompts.
That's equally my goal with this API. But I do not think preventing an
implementation, or a user, from implementing prompting and/or key
migration, is something this spec should do.

A specification cannot prevent implementors from providing whatever capabilities they like.

I'm asking for the possibility for the application to provide more information to the browser about the properties the application would like the key to have. Giving more information can only increase the options that browser implementors have.
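The extra information being asked for here could be as small as one attribute on the key-generation request. The following is a purely hypothetical sketch (a `scope` field exists in no draft of the API, past or present); it only illustrates the shape of the hint an application might pass:

```javascript
// Hypothetical key-generation request carrying an application-supplied
// scope hint. "scope" is invented for this sketch; everything else
// mirrors the shape of a WebCrypto generateKey() call.
const keyRequest = {
  algorithm: { name: "AES-GCM", length: 128 },
  extractable: false,
  usages: ["encrypt", "decrypt"],
  // Hypothetical hint: "origin-specific" tells the browser this key
  // will never be meaningful to another origin, so it may safely skip
  // any cross-origin sharing UI; "origin-authorized" means other
  // origins may ask the user for access to it.
  scope: "origin-specific"
};
```

A browser that ignores the hint loses nothing; a browser that honors it can give origin-specific keys a quieter, cookie-like lifecycle.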

For our specific use-case, we need both the pre-provisioned keys and temporary keys to have the same non-migratability properties. Of course we can't force anyone to implement these, but our application won't work, or will work differently, on browsers that don't provide this. It may even be that all desktop browsers enable migration of all kinds of keys. This is fine, but we should also support (non-desktop?) browsers that choose to do things differently.


Received on Thursday, 9 August 2012 21:57:31 UTC