Re: Origin-bound keys

Sent from my iPhone

On Aug 9, 2012, at 7:37 PM, "Ryan Sleevi" <sleevi@google.com> wrote:

> On Thu, Aug 9, 2012 at 2:57 PM, Mark Watson <watsonm@netflix.com> wrote:
>> 
>> On Aug 9, 2012, at 11:04 AM, Ryan Sleevi wrote:
>> 
>> On Mon, Aug 6, 2012 at 12:41 PM, Mark Watson <watsonm@netflix.com> wrote:
>> 
>> 
>> On Aug 6, 2012, at 11:34 AM, Ryan Sleevi wrote:
>> 
>> 
>> While hand-waving about UI decisions (as I tend to get UI horribly
>> wrong as a general rule), it seems to me that generating
>> origin-specific-and-persistent keys is equally troubling, because it
>> encourages application providers to 'lock-in' keys to their specific
>> origin.
>> 
>> 
>> I don't really understand this concern. A key created by one origin can only
>> be used by another with the cooperation of the first. In the case of
>> symmetric keys they need to share the actual keys. In the case of key pairs
>> with certificates establishing some form of identity they need to share the
>> public key of the key pair used to sign the certificate. You can't share the
>> keys across origins without the application provider's consent anyway, and if
>> they give their consent then they'll use origin-authorized keys.
>> 
>> 
>> I think this point may be where some of our misunderstanding comes
>> from. I believe that "A key created by one origin can only be used by
>> another with the cooperation of the first **or the permission of the
>> user**". Note the ** portion.
>> 
>> 
>> I've a feeling we're talking at crossed purposes. Here's a specific example:
>> Netflix.com creates a symmetric key in your browser and marks it as not
>> supporting export. You - the user - would like to use that key with
>> Hulu.com.
>> 
>> But that key is completely useless to Hulu.com unless someone provides them
>> with the raw key.  You can't provide them with it (export is not supported),
>> and Netflix.com is not going to do that either.
>> 
>> Another: An application from netflix.com creates a public/private key pair
>> for your browser and arranges for netflix.com to make a certificate for you
>> for that key pair. The certificate indicates that the signing authority is
>> auth.netflix.com. You would like to use that key to log into hulu.com.
>> Again, that can only work if hulu.com has information from netflix.com
>> (the public key of auth.netflix.com) which we do not have to give them.
> 
> This is not accurate. You absolutely could use that key with hulu.com
> and mint a hulu.com certificate. This would, in fact, be rather
> desirable both for users and for (netflix.com, hulu.com), if the user
> had generated/stored the key in an 'extra-strong' way (whether that be
> via Secure Element, via the OS store rather than a per-app text file,
> etc - up to the implementation).

Ok, I see your point for key pairs. But my point stands for symmetric keys.
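To make the symmetric case concrete, here's a minimal sketch in a promise-style generateKey/exportKey shape (illustrative only - the method names and style are not taken from the current draft). A key created as non-extractable has no raw form that anyone could hand to hulu.com:

    // netflix.com generates a symmetric key and marks it non-extractable.
    const key = await crypto.subtle.generateKey(
      { name: "AES-GCM", length: 128 },
      false,                          // extractable: the raw key can never be read back
      ["encrypt", "decrypt"]
    );

    // Neither the user nor script on any origin can obtain the raw bytes,
    // so there is nothing that could be carried over to hulu.com.
    try {
      await crypto.subtle.exportKey("raw", key);
    } catch (e) {
      // Rejected: the key is not extractable.
    }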

> 
> There's no need at all for hulu.com to know that the public/private
> keypair was originally created for netflix.com, nor for netflix.com to
> know that the key has also been reused for hulu.com. There's no
> statement for or against exposing the netflix.com certificate to
> hulu.com - maybe it's useful to hulu.com, maybe it's not - but I'm not
> sure I see a strong case for or against, which is why I'd rather leave
> it up to implementations.
> 
> I'll readily admit that there is a risk here, in that it allows some
> degree of cross-origin spoofing. Under this shared key scenario, an
> XSS vuln on netflix.com might allow an attacker to spoof messages to
> hulu.com. There are a number of ways to mitigate this beyond the point
> of concern though (eg: the implementation requires CSP to be in
> effect, the implementation exposes TLS channel bindings to thus
> mitigate spoofing, the implementation uses username+password+key, etc).
> 
>> 
>> Since actually using a key created by one origin requires the cooperation of
>> the organization owning that origin, any organization supporting such re-use
>> would create those keys as origin-authorized, not origin-specific.
>> 
>> There is no (technical) way to *force* organizations to design their
>> security such that keys/identities can be reused across origins.
>> 
>> 
>> My view is that while access to keys is limited on an origin-based
>> security model, I do not think that the origin is the sole arbiter of
>> key security or access. This is consistent with the 'native code'
>> model, in which Application 1 stores a key in the user's OS layer key
>> store, and *any application* that runs on the user's machine can later
>> access that key. In our model of a JS API, "any origin" can
>> *potentially* access the key, dependent on the implementation and how
>> it exposes the key.
>> 
>> 
>> Access to such a key is useless unless the creating origin is cooperating in
>> some way. If not there's no point in requiring that the key can be accessed
>> by others with user authorization. On the other hand there's advantage in
>> marking the key as unusable to others: fewer security dialogs, less user
>> confusion etc.
> 
> You're presuming a particular interpretation. I'm just providing
> examples. I think implementations, not site operators, should be
> making the decisions about what the appropriate user experience should
> be.

I am not disagreeing on this point.

I'm saying that the application should be able to request certain policies from the browser, so that the browser can make better decisions as to the appropriate user experience. Just as we allow apps to request that the raw key not be exportable, they should be able to request that the key be origin-specific. Browsers are free to allow user override of either of these policies.
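To make the shape of that request concrete, here's a rough sketch (the extractable flag is the existing mechanism; the origin-specific option is purely hypothetical and is named here only to illustrate the kind of hint I mean):

    // The application asks for a key that is (a) non-exportable and
    // (b) intended for use by this origin only. "originSpecific" is NOT in
    // any draft - it is the policy hint being proposed, and the browser
    // would remain free to let the user override it.
    const key = await crypto.subtle.generateKey(
      { name: "AES-GCM", length: 128 },
      false,                          // extractable: raw key not exposed to script
      ["encrypt", "decrypt"],
      { originSpecific: true }        // hypothetical hint, not a real parameter
    );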

> If the implementation does things that are so user-hostile as to
> be unusable, site operators won't deploy (see, for example, TLS client
> cert UIs 10 years ago).
> That doesn't mean the spec will be a failure -
> and implementations can certainly adjust their behaviours to make the
> experience smoother.
> 
> I'm not trying to be unsympathetic here, but it feels like a large
> part of the objections are tied solely around one hypothetical, pie in
> the sky user interaction mode, from a non-UI-person. My point is to
> demonstrate the robustness of being general.
> 
> To reiterate past comments, I have no objection whatsoever to a site
> 'hinting' that a key should always be origin constrained. I have
> strong objections to *requiring* that an implementation receiving the
> hint *always and only* origin constrain.

I would see this as a request from the application for the browser to apply a specific and well-defined policy - just like requesting that the raw key not be exposed.

I'm not much bothered about the language around whether browsers should or must respect the policy: we can't compel implementations to do anything, or prevent them from implementing overrides, or prevent users from modifying the browser to implement an override, etc.
> 
>> 
>> 
>> I feel like I've given sufficient examples about why I take this view,
>> particularly the "cloud storage" example, but if it would be helpful
>> for me to spell out why I think this is necessary, I'll be glad to do
>> so. I just want to make sure we're first understanding each other's
>> respective positions on the key security model.
>> 
>> 
>> I'm not sure we are ;-) More examples would be good.
>>
>> My thoughts on origin-authorized user interaction are that /any/
>> application can create an origin-authorized key without special UI.
>> 
>> 
>> I'm worried that not every implementation will agree with that (and may
>> require user interaction).
>> 
>> 
>> And this is equally true for other APIs in the Web Platform space -
>> such as getUserMedia(), navigator.geolocation.getCurrentPosition(),
>> IndexedDB quota limits, etc. As Harry mentioned during the Face to
>> Face, the W3C tries to avoid requiring or forbidding certain UI
>> interactions, so I don't know if we can realistically accomplish the
>> desired behaviour in the spec. A conformant implementation could
>> prompt for *any* access for the Web Crypto API - for example, if it
>> was subject to timing attacks that could lead to key disclosure.
>> 
>> 
>> Sure. But if we don't distinguish between origin-authorized and
>> origin-specific keys then the UI for them has to be the same. The browser
>> implementor isn't even given the choice to implement a simpler UI for the
>> origin-specific case.
>> 
>> 
>> 
>> It's only when another origin attempts to discover keys with a 'broad'
>> query (eg: rather than specific IDs, it says 'all RSA keys'), and no
>> pre-existing per-origin grants exist, that the user may be prompted to
>> select from an existing set of origin-authorized keys.
>> 
>> 
>> In our use case, we will create several "session" keys through key
>> agreement, authenticated with the pre-provisioned key. The session keys then
>> have a lifetime of some number of hours or days. These keys would be
>> meaningless to any other application and also to users. I would not want
>> users to get dialogs showing big lists of incomprehensible UUID-identified
>> keys that were intended for internal use by individual applications.
>> 
>> 
>> This is perhaps another way of making the distinction. Not all keys are
>> meaningful to other origins. Especially symmetric keys.
>> 
>> 
>> While I agree with the fundamentals, and while I'm certain Netflix's
>> use case is intended to favor the user, I'm concerned about
>> less pro-user sites that use keys as a lock-in mechanism.
>> 
>> 
>> Applications can always use keys to lock in users, whether or not those keys
>> are visible to other origins.
>> 
>> I forget the details of your cloud-storage example, but re-constructing: is
>> your concern that a cloud storage application would store the key for the
>> user's data as origin-specific and thus prevent the user from accessing
>> their own data except through that service ? Requiring the cloud storage
>> application to reveal that key to other origins does little to simplify
>> extracting the data without the help of the service. You would need to
>> reverse-engineer the application to understand how the data was requested
>> from the service and how it was then decrypted. A user capable of this is
>> easily capable of recompiling (or just configuring) an open source browser
>> to reveal the origin-specific keys.
>> 
>> The point of the distinction is that if we have it, browsers can safely
>> choose to give ordinary users a less invasive and confusing UI for the
>> origin-specific keys.
> 
> The user does not have to do this reverse engineering - the
> *alternative* service provider does.
> 
> And I think the history of "web mashups" shows that such cases happen
> regularly, regardless of whether the "first" service provider offers a
> formal API or not. While I'm sure from the point of view of "service
> providers" that such mash-ups may not be endorsed or desired, I think
> it's clear from history that they can work in favor of users and stir
> up real innovation.
> 
>>
>> I believe that, at the end of the day, the user must remain in control of
>> the security and keying material on their system.
>> 
>> I think we'll continue to have the discussion about the 'right way' to
>> use the API, but I would expect that the only time dialogs or user
>> interaction even remotely come into consideration is when an
>> application gives a 'broad' KeyQueryList - that is, one which does not
>> try to specifically identify 'known' keys, and which does not
>> indicate that it's looking for keys it has already been granted access to.
>> 
>> 
>> I'm not a privacy expert, but one concern is whether user interaction is
>> required to create keys which may later be visible to other origins.
>> 
>> I can see an argument that this is required, so that later when the user is
>> asked to grant access to those keys they may have some memory of when or why
>> they were created.
>> 
>> Keys that are visible to other origins raise tracking concerns that don't
>> exist for origin-specific keys. Actually I'd originally assumed that all
>> keys would be origin-specific for this reason (no more dangerous than
>> cookies).
> 
> Keys that are only visible to other origins do not, in my opinion,
> raise tracking concerns. The whole point is that user consent has been
> granted.

We should run this by the tracking experts. It's not black-and-white due to the frequency with which users click through any and all dialogs.
> 
> The normal response is "If tracking concerns, then user consent
> required". Your argument seems to be "If user consent granted, then
> tracking concerns". I'm not sure that's accurate (either my
> understanding or that argument).
> 
>> 
>> Then, it's up to the implementation to decide how to react
>> accordingly; including a perfectly appropriate response which is to
>> say "no such keys exist" and never show any UI. That's up to the
>> implementation though.
>> 
>> 
>> This is, in effect, the SSL/TLS client certificate security model, but
>> applied to keys rather than certificates. This seems to mesh particularly
>> well with the Secondary API features, which is why I'm so fond of it.
>>
>> Though it's probably an unrealistic security posture, my gut is that
>> the decision about whether keys are origin-specific or
>> origin-authorized is a decision better left up to the user (or
>> further, not even specified), since it affects both their privacy,
>> potentially browser flexibility (since it now requires a formal
>> specification of the stable key storage), and their flexibility to
>> choose between web application providers. The question is: Are there
>> situations where the application may wish origin-specific, but the
>> user wishes origin-authorized? I think so.
>> 
>> 
>> Can you give an example ?
>> 
>> 
>> I think it reduces flexibility if the information about the intended use of
>> the key (only ever with one origin, or possibly with other origins) is not
>> available to the browser. We miss opportunities for improved user experience
>> because all keys have to be treated like they will be used elsewhere when
>> most of the time that will not be the case.
>> 
>> 
>> Consider Applications 1 and 2, both of which wish to use
>> 'origin-specific' RSA keypairs as part of the authentication workflow.
>> During the registration/enrollment phase, both applications have
>> workflows similar to the following:
>> 
>> 1) Generate an RSA key pair of size X
>> 2) Export the public key and send to the server
>> 3) Associate the (username, public key) on the server side
>> 4) For future authentication attempts, the user must perform some
>> signing operation with the private key
>> 
>> This is a rather simple, easy to understand auth flow.
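(For illustration, a minimal sketch of that enrollment flow in promise-style code; the endpoint path, RSA parameters and encoding below are my own assumptions, not anything either application has specified:)

    // 1) Generate an RSA key pair; the private key stays non-extractable.
    const pair = await crypto.subtle.generateKey(
      { name: "RSASSA-PKCS1-v1_5", modulusLength: 2048,
        publicExponent: new Uint8Array([1, 0, 1]), hash: "SHA-256" },
      false, ["sign", "verify"]
    );

    // 2) + 3) Export only the public key and register it against the username.
    const spki = new Uint8Array(await crypto.subtle.exportKey("spki", pair.publicKey));
    await fetch("/enroll", {
      method: "POST",
      body: JSON.stringify({ user: "alice",
                             publicKey: btoa(String.fromCharCode(...spki)) })
    });

    // 4) On later visits, prove possession by signing a server-supplied challenge.
    const signature = await crypto.subtle.sign(
      "RSASSA-PKCS1-v1_5", pair.privateKey, challengeBytes); // challengeBytes: from the server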
>> 
>> One scenario would be that each origin generates an origin-specific
>> key pair (eg: they have no relationship to each other). That's fine,
>> and if that's all an implementation supported, the scenario fully
>> works as expected for both applications.
>> 
>> However, imagine the user already had an RSA public/private key pair
>> through some other means. It may even be stored in a
>> higher-than-normal security layer (stored in the OS with a
>> prompt-on-use semantic, stored on a secure element, stored on some
>> other device, etc). Rather than generating two origin-specific
>> keypairs, in addition to the one they already have, the user wishes to
>> *reuse* the existing key pair for both sites.
>> 
>> Yes, this allows Application 1 and 2 to collude and determine that
>> user-Application-1 is probably user-Application-2, since they both
>> share the public key, but that's done at the user's choice. (As it
>> stands, the applications can probably already collude based on either
>> username+password+ip matching, or by simply looking at the user's
>> email address, but that's a different story). In this case, even
>> though both origins requested origin-specific keys, the user provides
>> an origin-authorized key.
>> 
>> That may seem like a contrived example ("The user already had the key,
>> therefore it's different") - but I think it's also possible that this
>> high-security key was generated *by the user* when some OTHER
>> application (Application 3) was requesting a key - perhaps even an
>> origin-specific one! At the time of generation, the user configured
>> all of these high-security options, which is how that key came into
>> being.
>> 
>> While I can certainly understand dialog fatigue, and again, I'm making
>> no commitment one way or the other with regards to UI, I'm trying to
>> leave this API sufficiently flexible that such implementation-specific
>> behaviours or features are not forbidden or impossible.
>> 
>> 
>> What you describe is a pretty sophisticated user who (presumably)
>> understands or has been warned of the dangers of sharing keys this way.
>> 
>> I wonder if we can come up with some other definition of origin-specific
>> that meets both our requirements ? There is still a significant difference
>> in my mind between the case where the application does not intend the key to
>> be re-used by other origins (I would say this is the more common case), and
>> the case where it is really the intention that the key be re-used across
>> origins (because it's tied to some federated identity scheme that allows the
>> user to prove some form of identity to multiple applications).
>> 
>> 
>> 
>> 
>> Where I'm having trouble is in understanding whether there are particular
>> _security benefits_ from making the distinction, or whether the concern is
>> primarily related to UI. If the latter, then couldn't origin-specific
>> pre-provisioned keys be handled by implementations that support such
>> keys? eg: in the Netflix case, couldn't the Netflix User Agent simply
>> know that certain keys are origin-specific, and only expose them for
>> particular origins, and to do so without prompting? This avoids having
>> to spec the distinction, and leaves implementations the flexibility to
>> implement according to their underlying key storage mechanisms.
>> 
>> 
>> 
>> For the pre-provisioned keys, yes: we can just say that these must only be
>> exposed to the netflix.com origin.
>> 
>> 
>> But we'd like the generated session keys to have the same property, and for
>> the user to suffer no more from their existence (in terms of privacy, control
>> and confusing UIs) than they do today from site-specific cookies.
>> 
>> 
>> ...Mark
>> 
>> 
>> Users and user agents can already configure prompting for
>> site-specific cookies. Thus I think the point of 'like cookies'
>> highlights my concern - that implementations, not applications, should
>> be in charge of security relevant decisions.
>> 
>> 
>> Yes, for the 99.9% of users, they never encounter these prompts.
>> That's equally my goal with this API. But I do not think preventing an
>> implementation, or a user, from implementing prompting and/or key
>> migration, is something this spec should do.
>> 
>> 
>> A specification cannot prevent implementors from providing whatever
>> capabilities they like.
> 
> I think we might have some spirited disagreement here. A spec can
> require behaviour X, which may be mutually exclusive with behaviour Y,
> effectively preventing Y. An implementation could certainly implement
> Y, but then it's not implementing the spec, and sure, then we have a
> problem.

Why? I see no problem with implementations choosing to provide user overrides irrespective of what the spec says.

> 
> I'd rather leave either X or Y possible - especially as it allows for
> method Z to be discovered as the truly better solution once
> implementors have years of experience with this API and better
> understand the risks.
> 
>> 
>> I'm asking for the possibility for the application to provide more
>> information to the browser about the properties the application would like
>> the key to have. Giving more information can only increase the options that
>> browser implementors have.
>> 
>> For our specific use-case, we need both the pre-provisioned keys and
>> temporary keys to have the same non-migratability properties. Of course we
>> can't force anyone to implement these, but our application won't work, or
>> will work differently, on browsers that don't provide this. It may even be
>> that all desktop browsers enable migration of all kinds of keys. This is
>> fine, but we should also support (non-desktop?) browsers that choose to do
>> things differently.
>> 
>> ...Mark
> 
> I'm not sure what language change is required for an implementation
> that doesn't wish to implement migratability. At present, the language
> is intended to be broad enough to allow either/or - or even shades of
> this - at the implementation's discretion. I don't particularly favor
> one or the other - and I'd reasonably expect that we'd implement some
> shade-of scheme anyways.
> 
For our application to work (in its best form) the pre-provisioned keys and the session keys need to be non-migratable. An implementation that supports that may also wish to support other applications that allow their keys to be used by other origins. That implementation needs to know which policy the app prefers.
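As a sketch of what that looks like for the session keys (the ECDH-style agreement here is purely illustrative - our actual exchange is authenticated with the pre-provisioned key, which is omitted - and the origin-specific hint would be the same hypothetical one sketched earlier):

    // Derive a session key from a key agreement. The derived key is
    // non-extractable and, under this proposal, would also carry the
    // origin-specific hint so the browser never offers it to other origins.
    const sessionKey = await crypto.subtle.deriveKey(
      { name: "ECDH", public: serverPublicKey },  // serverPublicKey: assumed CryptoKey
      clientAgreementKey,                          // our side of the agreement (assumed)
      { name: "AES-GCM", length: 128 },
      false,                                       // extractable: never exportable
      ["encrypt", "decrypt"]
    );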

...Mark

Received on Friday, 10 August 2012 03:40:36 UTC