
Re: ACTION-22: Key export

From: Mitch Zollinger <mzollinger@netflix.com>
Date: Sun, 26 Aug 2012 17:06:51 +0900
Message-ID: <5039D91B.7040204@netflix.com>
To: Ryan Sleevi <sleevi@google.com>
CC: <public-webcrypto@w3.org>
Ryan,

First off, thank you. Things are beginning to make more sense because of 
your detailed response. More inline below...

On 8/25/12 1:10 PM, Ryan Sleevi wrote:
> On Fri, Aug 24, 2012 at 7:26 PM, Mitch Zollinger <mzollinger@netflix.com> wrote:
>> On 8/23/12 3:44 AM, Ryan Sleevi wrote:
>>> On Wed, Aug 22, 2012 at 1:23 AM, Mitch Zollinger <mzollinger@netflix.com>
>>> wrote:
>>>> Sorry for the slow response on this issue. I'm currently on business
>>>> travel in Asia.
>>>>
>>>> Responses below.
>>>>
>>>>
>>>> On 8/17/12 3:50 AM, Vijay Bharadwaj wrote:
>>>>
>>>> No, I was thinking of authorization and export as separable issues.
>>>>
>>>>
>>>>
>>>> - You can only see keys that you are authorized for.
>>>>
>>>> - Of the keys you can see, you can potentially export the
>>>> exportable ones without UI.
>>>>
>>>>
>>>> Ok. This matches my understanding.
>>>>
>>>>
>>>>
>>>>
>>>> Regarding the DH+KDC model, I wonder if this isn't getting real close to
>>>> a high-level API. It feels like it's a hop and a skip away from a generic
>>>> box/unbox API. This is not to say that it isn't a worthy goal, just that
>>>> it may be hard to generalize to a low-level API.
>>>>
>>>>
>>>> When I first read your response, my initial reaction was actually to
>>>> agree with this. If I assume that there is no possibility of protected
>>>> key exchange + key derivation, then I know that if my JS app is
>>>> compromised, the session key is exposed. But if a security attack happens
>>>> (and let's assume I hear about it!), I *could* actually fall back to
>>>> using pre-shared keys to "recover" my secure session with the device.
>>>>
>>>> But, after a bit more thought, I actually believe that there is a reason
>>>> to push for protected key exchange / derivation based on:
>>>> * We have explicitly stated that we want key protection; that is, in some
>>>> instances of keys, the JS is not allowed access to the keying material.
>>>> * We have explicitly stated that we want session keys as the product of
>>>> some key exchange mechanism.
>>>> * Taken together, there's no way to satisfy the above two goals without a
>>>> protected key exchange / key derivation, correct?
>>> Probably not as you'd like, but the existing spec does cover this, so
>>> I think it'd be helpful if it was understood why this would be
>>> unacceptable.
>>>
>>> As discussed on the phone call, you could generate a DH key pair with
>>> a KeyGenerator (with the DH private key/value that is "opaque" to
>>> content script, but not necessarily the U-A)
>>
>> Do you have actual pseudo-code you can share? I'm not seeing this as obvious
>> from the spec. Unfortunately, I haven't been able to attend the calls while
>> in Asia / won't be able to attend the calls until I'm back from Asia in a
>> week or so.
>>
>>
>>>    and derive a shared
>>> secret key with a KeyDeriver (passing the peer's DH public value). If
>>> there was a need for more rounds, you could use multiple KeyDerivers,
>>> passing in different Key objects as appropriate. For example, an RFC
>>> 2631 scheme that has a single round to expand the DH shared secret
>>> into an appropriate symmetric key, or an RFC 6189 style exchange that
>>> derived expanded ZRTP Confirm1/Confirm2 into an SRTP session key.
>>
>> Again, some example pseudo-code of what you're proposing would help greatly.
> Ok, here's a rather complete-yet-still-pseudo code of performing X9.42
> / RFC 2631 key agreement. I did X9.42 rather than ZRTP, as ZRTP has
> other parameters (eg: commit hashes) that just add code, but don't
> actually demonstrate the core concept. X9.42 / RFC 2631 takes an
> agreed upon shared secret (the result of DH Phase 2) and expands that
> into keying material suitable for the underlying algorithm.
>
> If anything, the immense complexity of this highlights why I think a
> synchronous-and-worker-only API "MIGHT" be a simpler API...
>
> // Using Diffie-Hellman
> // Obtain DH parameters (eg: from server certificate, from protocol
> // exchange, from NIST well-known params)
> var prime = ...;
> var generator = ...;
>
> // Handles completion of the X9.42 expansion/agreement
> function onX942DeriveKeyComplete(keyDeriver) {
>    var finalKey = keyDeriver.result;
>    // You now have a Key object (possibly fully opaque/non-exportable) that
>    // contains the X9.42-agreed key.
> };
>
> // Handles completion of Phase 2 of DH agreement
> function onDHDeriveKeyComplete(keyDeriver) {
>    // zz is the result of the Phase 2 of PKCS #3 and is equivalent to ZZ
>    // as documented in X9.42 - aka the shared secret
>    // ZZ = g ^ (xb * xa) mod p
>    var zz = keyDeriver.result;

Is zz the actual shared secret, or is it an opaque handle at this point?
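As a sanity check on the Phase 2 arithmetic quoted above (ZZ = g ^ (xb * xa) mod p), here is a toy sketch with plain BigInts and deliberately tiny, non-secure parameters; every name in it is mine, none of it is the draft API:

```javascript
// Toy check of DH Phase 2: both sides arrive at the same ZZ.
// Parameters are tiny and NOT cryptographically secure - illustration only.
function modPow(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

const p = 23n;                 // prime
const g = 5n;                  // generator
const xa = 6n;                 // A's private value
const xb = 15n;                // B's private value

const ya = modPow(g, xa, p);   // A's public value: g^xa mod p
const yb = modPow(g, xb, p);   // B's public value: g^xb mod p

const zzA = modPow(yb, xa, p); // A computes ZZ from B's public value
const zzB = modPow(ya, xb, p); // B computes ZZ from A's public value
// zzA === zzB === g^(xa*xb) mod p
```

Whether the script ever sees those ZZ bytes (versus an opaque handle) is exactly the question here.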

>
>    // Now expand zz according to X9.42/RFC 2631. Completely hypothetical
>    // specification of params here.
>    var otherInfo = {
>      'keyInfo': { 'algorithm': { 'name': 'AES-CBC', 'params': {
> 'length': '128' } } },
>    };
>
>    var x942 = window.crypto.deriveKey({ 'name': 'X9.42', 'params':
> otherInfo}, zz);
>    x942.addEventListener('complete', onX942DeriveKeyComplete);
>    x942.derive();
> };
>
> function onExportComplete(keyExporter) {
>    // Obtain our public value (eg: an ArrayBuffer)
>    var ourPublic = keyExporter.result;
>
>    // Get peer public value (eg: from certificate, from protocol
>    // exchange, from pre-provisioned key)
>    var peerPublic = ...;
>
>    // Send peer the generated public value, then run DH Phase 2 using
>    // our own (opaque) private key and the peer's public value.
>    var dhDerive = window.crypto.deriveKey(
>        { 'name': 'DH', 'params': { 'public': peerPublic } },
>        generatedKeyPair.privateKey);
>    dhDerive.addEventListener('complete', onDHDeriveKeyComplete);
>    dhDerive.derive();
> };
>
> // Handles completion of Phase 1, aka key generation
> // TODO - asymmetric key pair generation needs to indicate that
> // Result is a Pair of Keys, not a single Key.
> var generatedKeyPair;
> function onGenerateKeyComplete(keyGenerator) {
>    generatedKeyPair = keyGenerator.result;
>
>    var keyExporter = window.crypto.exportKey('json', generatedKeyPair.publicKey);
>    keyExporter.addEventListener('complete', onExportComplete);
>    keyExporter.export();
> }
>
> var dhKeygen = window.crypto.generateKey({ 'name': 'DH', 'params': {
>     'prime': prime, 'generator': generator } });
> dhKeygen.addEventListener('complete', onGenerateKeyComplete);
> dhKeygen.generate();
>
>
>>
>>>> What if we were to simplify the last proposed mechanism to this instead:
>>>>
>>>> ProtectedKeyExchange kex = ProtectedKeyExchange(/*algorithm*/"foo");
>>>> kex.init(/*algorithm specific params*/);
>>>> while(! kex.done()) {
>>>>     Uint8Array client_data = kex.getNext();
>>>>     /* ...send client_data to server... */
>>>>     /* ...get server_data from server... */
>>>>     kex.step(server_data);
>>>> }
>>>> /* get a handle to the protected key that was exchanged */
>>>> Key key = kex.getKey();
>>>>
>>>> ?
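For what it's worth, the driver shape sketched above can be mocked end to end; ToyExchange below is an invented stand-in for a UA-internal implementation, just to show that the script only ever shuttles opaque blobs and finishes with a handle, never the secret itself:

```javascript
// Toy illustration of the "script drives, never sees keys" loop.
// ToyExchange stands in for a UA-internal implementation: the script
// shuttles opaque byte blobs, and the final key stays behind a handle.
class ToyExchange {
  constructor(rounds) { this.rounds = rounds; this.step_ = 0; this.secret_ = 0; }
  done() { return this.step_ >= this.rounds; }
  getNext() { return Uint8Array.of(this.step_); }   // blob for the peer
  step(serverData) {                                // blob from the peer
    this.secret_ = (this.secret_ * 31 + serverData[0]) | 0;
    this.step_++;
  }
  getKey() { return { extractable: false }; }       // opaque handle only
}

const kex = new ToyExchange(3);
while (!kex.done()) {
  const clientData = kex.getNext();
  // ...send clientData to server, get serverData back...
  const serverData = Uint8Array.of(clientData[0] + 1); // fake server reply
  kex.step(serverData);
}
const key = kex.getKey();
// The internal secret_ never crossed the API surface; only `key` did.
```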
>>>>
>>>> The net effect is that we don't have to declare any sort of first class
>>>> key exchange + key derivation concepts at the WebCrypto API level; we
>>>> simply exchange keys where the end result is a handle to a
>>>> non-exportable key.
>>>>
>>>> This "ProtectedKeyExchange" could simply be thought of as short-hand for
>>>> something like:
>>>>
>>>> KeyExchange kex = KeyExchange(/*algorithm*/"foo", /*exportable*/false);
>>>>
>>>> The difference being that we may want to specify ProtectedKeyExchange as
>>>> a simplification. (What if exportable is "true"? Also, the algorithm
>>>> names for ProtectedKeyExchange will be different from KeyExchange.) But,
>>>> that's really more of a style question.
>>>>
>>>> Would the above better meet the goal of avoiding a "high level API"?
>>>>
>>>>
>>>> Mitch
>>> - If the only purpose of "ProtectedKeyExchange" is to imply a default
>>> value for KeyExchange (/*extractable*/ false), then that's a
>>> non-starter. The implementation overhead of bindings for that,
>>> especially for what is nominally just a parameter value, would not be
>>> acceptable.
>>
>> There is more to it than that: the key exchange under the covers is
>> functionally / conceptually distinct from the generic KeyExchange( bool
>> extractable ) case.
>>
>> By way of example, let's say that I'm doing a "signed DH exchange" (yes, I'm
>> inventing an implementation for the example and trying to inform the API
>> based on this synthetic protected exchange, but this is something similar to
>> what we have now & will need in WebCrypto) for the ProtectedKeyExchange, but
>> just a generic DH exchange for KeyExchange. In this example, I cannot create
>> a KeyExchange object using the "Signed DH" algorithm; that can only be
>> created for a ProtectedKeyExchange. The other possibility is to allow
>> "Signed DH" key exchange algorithm only when the "exportable" boolean is set
>> to false. I would be ok with that, but stylistically it feels less correct.
> The latter is, I believe, more consistent with existing APIs, but I
> can understand the "code smell".
>
> Understandably, we're in a place where we're no longer talking about
> existing, standards-oriented algorithms, but things that are almost
> certainly going in to the realm of algorithms that are specific to
> certain user agents or implementations. Unless, of course, you simply
> mean that "ProtectedKeyExchange" means no intermediate key material is
> available to the content script - at which point, I think we've
> already got that sufficiently covered.

That's what I meant. The above proposal was one way to skin that cat, 
where I was going for simplicity and extensibility.

>
> Again, it seems that ProtectedKeyExchange's purpose is purely to serve
> as syntactic sugar for a functionality boundary / default parameter.
> If I understand your proposal correctly, ProtectedKeyExchange simply
> means the client application will never be able to access the keying
> material that results. Is that correct?

Yes.

> Am I also understanding that
> this is being proposed as an OPTIONAL / MAY (not even a normative RFC
> SHOULD) - eg: not all user agents need to support
> ProtectedKeyExchange?

I don't want to add the ProtectedKeyExchange if we can meet the intended 
goals of a multi-step key exchange / derivation where no keying material 
created during the different phases is visible to the script code at any 
time.

>
> My quick take on this, admittedly having not fully considered the
> ramifications, is that it is reasonable to have certain algorithms /
> operations that do not support exporting the (generated, derived,
> imported) keys, that result from the existing interfaces. Further, if
> we are to ever consider the possibility of secure elements, then
> regardless of how the user agent provides interaction with them, it's
> quite likely that a key may be generated on such a secure element that
> is immediately and perpetually unexportable. Unlike how I understand
> your PKE scenario (in which the web application is explicitly
> requesting PKE), this would be a situation where it's perhaps the
> result of a user decision (or by the user agent), and thus the web
> application is completely unaware of the fact that a PKE has happened.
>
> Normatively, this would be something more spec-y, but would say:
> If the user requests an operation+algorithm with exportable=true, and
> that request cannot be fulfilled, the operation should fail and the
> onerror event should be called.
>
> Thus, you could have your custom algorithm, such as
> NetflixProtectedKeyExchangePhase1
> NetflixProtectedKeyExchangePhase2
> NetflixProtectedKeyExchangePhase3
>
> Where each phase had some Result object (including possibly an opaque
> series of bytes / ArrayBufferView, if that's how you wished to spec
> it) that had to be passed to the peer before the next phase.
>
> Or you could have a single algorithm, which just differentiated based
> on the Params dict - eg:
>
> enum NetflixPhase {
>    "MEK-exchange",
>    "KEK-exchange",
>    "KEK-unwrap",
>    "KEK-secure-proof",
> };
>
> dictionary NetflixProtectedKeyExchangeParams : AlgorithmParams {
>    NetflixPhase currentPhase;
>    ArrayBufferView opaqueBytes;
> };
>
> Where you just repeatedly created KeyDeriver objects for each of the
> phases, as appropriate, until you ended up with the "final" Key.

I believe I understand what you're proposing & the code would mimic very 
closely what you described above with X9.42 / RFC 2631 key agreement.

> The end result is that such a ProtectedKeyExchange is specific to your
> user-agent, and thus can be specified accordingly.
>
> If you wanted to standardize some sort of generic exchange for working
> with say, devices which have some embedded ID (eg: the TVs mentioned
> previously), you could absolutely do so - whether it be through
> contracts with vendors or through convincing the W3C that there is
> interest in doing so and that it's relevant and consistent with the
> W3C's goals.
>
> I'm by no means trying to take any sort of pejorative stance on such
> approaches, but what I am trying to highlight is an opinion that I
> believe such schemes are inherently specific to a certain use case,
> and thus not within the core realm of "Something everyone would need
> to do anything useful", which is sort of where I'd like to keep this
> API for now in order to be able to make any forward progress on it.
>
> It's tempting to try and describe every possible algorithm, but then I
> fear we'd end up with a document the size of PKCS#11 - hundreds of
> pages trying to describe the exact behaviours of every algorithm, and
> even IT left things underspecified.

Clearly that's not our intent.

>>
>>> - I'm not sure how a "KeyExchange" interface avoids having to declare
>>> a first class key exchange. It seems the very presence of an interface
>>> for key exchange, as proposed, inherently makes it 'first class'. Are
>>> you saying "You avoid having to specify the algorithm?" If so, I think
>>> that would also be a non-starter, as it's inherently not implementable
>>> by open, standards compliant browsers. If you do specify the
>>> algorithm, then I'm confident it's something we could express as a
>>> series of primitive steps without having to resort to opaque "here's
>>> some blob" exchanges.
>>
>> I believe that we can create a generic "low level" API that is algorithm
>> agnostic. Further, I believe that this should be the approach because it
>> allows implementers the freedom to do what they need to do & be creative ;)
> My concern is that I think you can already accomplish this, without
> requiring specific API additions. My further concern is that the API
> extensions proposed so far are, I believe, specific to your problem,
> and thus not generic enough to be useful.
>
> I can appreciate a concern that the API makes a certain use case hard
> - a valid concern that should absolutely be considered. But I'm also
> concerned about proposing APIs that are specific to a particular need
> that are not, perhaps, generic enough.
>
>> Locking this down to only Diffie-Hellman I believe is incorrect.
>> Diffie-Hellman is great, we use it all the time, but we shouldn't disallow
>> things like key unwrapping.
> I believe the underlying API is independent of any specific algorithm,
> by virtue of the same patterns being prevalent in other APIs and that
> they permit what I understand you to desire. I think it may just be
> some misunderstanding between us about why I believe that need is
> already filled in a (generic) way.
>
>> To reiterate: if we want to support key exchange of protected keys, the API
>> needs to treat this transaction as a first class API. The "exportable" flag
>> is insufficient for the task, unless we just dictate implementation behavior
>> that in the protected case we throw an error or some such developer
>> unfriendly thing.
> If a developer requests certain behaviour, and it's not supported (by
> policy, by underlying implementation, by design), I think it should
> cause an error.
>
> I'm honestly unsure whether or not it's reasonable to require that
> implementations MUST support key export *for the key algorithms
> defined in this spec* IF the user requests it. We could put that
> requirement in, but that will certainly immediately preclude a class
> of operations (eg: fully 'outsourcing' the crypto to a secure
> element), which means that the API cannot be conformantly implemented,
> even if everything ELSE could be supported.
>
> It's a reasonable concern, but I do not believe it necessitates the
> creation of Transaction objects - I think that's the KeyDerivation
> equivalent of a JOSE Message Signer - a combination of a series of
> low-level primitives and phases into one, cogently organized, logical
> result. You can polyfill that using the API proposed today, and
> nothing prevents that polyfilled API for being the basis of future,
> high-level standardization.
>
>>
>>> - More fundamentally, I think I have trouble with the idea of key
>>> exchange as a GSS-API like mechanism, which this appears to
>>> effectively be.
>>
>> You have me at a technical standards disadvantage; I had to go Google
>> GSS-API ;)
> Also, SASL, which I suspect at a protocol level is equally similar as
> the ProtectedKeyExchange proposed.
>
>>
>>>    The concept of these opaque bytes going through the
>>> application is, on some deep API level, a bit troubling. The fact that
>>> no such equivalent exists in any of the standards or APIs that I think
>>> are worth considering (PKCS#11 and CDSA as 'true standards',
>>> CryptoAPI, CNG, OpenSSL, BSafe - as "standard" APIs, to name a few) I
>>> think also highlights the specialist nature of this.
>>
>> I would like to think that this is a result of the work we're doing being
>> cutting edge. The fact that we don't trust all script code is a fairly
>> compelling reason to rethink old APIs, IMO.
> I can appreciate the position, but I don't think this is a case of
> "not cutting edge enough," but perhaps simply a miscommunication on
> why the needs (hopefully) are already met under the current API and
> how they've been successfully met over the past several decades.
>
>>
>>> While I understand this may be how you're currently doing things, I'm
>>> not sure this is something that should be supported - or at least, be
>>> considered as something under the 'optional' category that we wait to
>>> address until after we make sure the base level primitives are
>>> acceptable to everyone and implementable.
>>
>> Again, I would argue that protected key exchange -- in general terms --
>> should be in scope. Are you arguing against this as a high level objective
>> or is it more that this is considered a "Netflix proprietary" approach?
> So, there's two meanings of protected key exchange here
> - Protected from content script, but the content script is allowed to
> 'drive' the operation. I think this need is already met (as
> demonstrated by the pseudo-code)

This is what we're aiming for.

> - Protected from the user agent (as in, secure element provisioning),
> which I think is, at best, secondary features, but more likely out of
> scope in general.

I get this point. It's still somewhat unclear where this goal is 
incompatible with the "protected from content script" goal given that 
the underlying implementation could call out to a HW element. But that's 
more of a curiosity question.

>
> What I really, really, really want to avoid is trying to do something
> like "JavaScript GlobalPlatform", and I think the moment you start
> talking about "secure provisioning" or "protected key exchange",
> you've immediately entered such terminology. As discussed during
> chartering and in the past, such conversations immediately turn specs
> into multi-year efforts that will inevitably suffer and, for most
> practical/novel uses, be woefully complex and utterly unusable.

Nope. We're just trying to solve a problem in a pragmatic way while 
keeping the user / app as secure as we reasonably can.

> I also think it begins to touch into the 'high-level' concepts.
>
> Part of the reason that operations like key derivation are split into
> phases is because it's not at all uncommon for different algorithms to
> emerge that combine them in different ways. For example, X9.42 builds
> upon the DH phase 1 and phase 2, effectively adding a 'phase 3' of key
> expansion. However, ZRTP goes a different route, taking the DH
> Phase1/Phase2, but then adding in commits and the Confirm1/Confirm2
> phases. By de-composing these multi-step derivations into phases, we
> readily permit polyfill operations - for example, a UA could *ONLY*
> support Phase1/Phase2, and an application could polyfill in X9.42/ZRTP
> support as needed, since all of the primitives (hashing, MACing) are
> there.
>
>>
>>> Fundamentally, I recognize that this is closely related to key
>>> wrap/unwrap, which are not yet specified, due to first needing to make
>>> sure that key import/export are the correct APIs, due to the close
>>> relationship between the two. I understand that key transport
>>> (import/export) and key agreement (which I believe is already
>>> accommodated) are part of our primary API features. And I can
>>> appreciate the desire for 'secure provisioning' of keys. However, my
>>> concern is that the practical use cases for such an API are more
>>> closely related to secure elements, smart cards, or other device
>>> specific behaviours, at least as I understand your proposal.
>>
>> Not quite right. In fact, we're interested in protected key exchange which
>> may never involve pre-provisioned keys, crypto hardware or smart cards at
>> all.
> In which case, your application would/should never request the
> 'exportable' flag, and your problem should be solved.

I believe this is one of the key points that I would like to make 
certain we agree on. Would I be correct in taking the above comment and 
expanding it in more detail:
* Our application would never request "exportable = true".
* If our application ever did request "exportable = true" the underlying 
implementation would throw an error.
* Every phase in our key exchange + session key derivation, including 
the final stage, would have a result which was an opaque handle to the 
underlying key data, inaccessible to the script code.
?
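If it helps pin that contract down, here is a toy mock of exactly those three bullets (mockDeriveKey is invented for illustration, not the draft API):

```javascript
// Toy mock of the proposed contract: requesting exportable=true on an
// algorithm that forbids it fails via the error path, and a successful
// derivation hands back only an opaque, non-exportable handle.
function mockDeriveKey(algorithm, exportable) {
  if (exportable) {
    // Bullet 2: the implementation refuses rather than leaking material.
    throw new Error('NotSupportedError: key material may not be exported');
  }
  // Bullets 1 and 3: the result is a handle, never the raw bytes.
  return { algorithm: algorithm, exportable: false };
}

let failed = false;
try {
  mockDeriveKey('SignedDH', true);   // our app would never actually do this
} catch (e) {
  failed = true;
}
const handle = mockDeriveKey('SignedDH', false);
```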

>
>>
>>> Similar to the aforementioned APIs, I would rather expose the
>>> primitives (eg: DH phase 1, DH phase 2, ZRTP part 1, ZRTP part 2),
>>> than trying to describe an entire flow, with all of the
>>> protocol-specific parameters (eg: all the data that flows into an RFC
>>> 6189 3.1.1 exchange). Trying to define an API for an entire ZRTP flow,
>>> as an example, is something that I think is the essential definition
>>> of high-level.
>>
>> Are you saying that we should not have a low level "Key Exchange" API which
>> is agnostic to algorithm? I would propose that this is exactly what we
>> should aim for and that Diffie-Hellman should be a SUGGESTED algorithm
>> rather than making it first class to the exclusion of other key exchange
>> methods, including perhaps key wrapping / unwrapping. Key wrapping, given
>> the proper API design, is something that an implementer can choose to do
>> with version 1.0 of the API without the specification directly addressing
>> that use case.
> We as a work-group previously identified DH as first-class (along with
> ECDH, which I explained on the previous call why it was not yet
> spec'd).
>
> However, as described elsewhere, it by no means precludes any other
> form of Key Exchange - protected or not - regardless of whether they
> build upon DH or some entirely different crypto primitive.
>
> While I understandably don't know much about Netflix's specific
> implementation, I do think that you could suitably accomplish what you
> desire with the current API. I would expect that the concerns, if any
> remain, would most likely be centered around 'simplicity' (high-level,
> opaque) vs 'complexity' (low-level, phases), and trying to make sure
> the API was a sufficient balance, while still respecting our low-level
> API ambitions.
>
> I think an opaque Key Exchange, as proposed, is ideologically
> equivalent to the "Box" and "Unbox" primitives we discussed when
> scoping our work. I don't think there is anything inherently flawed in
> them, but I think it's a high-level API that is full of strong
> politics and preferences, much like trying to decide (WebIDL, JWK, or
> ASN.1 - or all of the above).
>
>> We ran into this same sort of issue with the "Certificate" versus "generic
>> auth token" debate previously. I realize that we want things to be clear as
>> possible to implementers to get WebCrypto boot strapped, but I would not
>> want to exclude interesting use cases & implementations because the APIs are
>> too restrictive. Of course, I'm new to standards & the W3C so my philosophy
>> may simply be misinformed.
>>
>> Looking forward to your response.
>>
>> Mitch
>>
>>
>>> Regards,
>>> Ryan
>>>
>>>>
>>>> From: Mitch Zollinger [mailto:mzollinger@netflix.com]
>>>> Sent: Wednesday, August 15, 2012 10:28 PM
>>>> To: Vijay Bharadwaj
>>>> Cc: public-webcrypto@w3.org
>>>> Subject: Re: ACTION-22: Key export
>>>>
>>>>
>>>>
>>>> On 8/14/12 9:38 AM, Vijay Bharadwaj wrote:
>>>>
>>>> Mitch> As described during our f2f: we would like to use a KDF on a
>>>> Diffie-Hellman negotiated shared secret to create a session key (or
>>>> session keys) where the raw session key is never allowed to be accessed
>>>> by the webapp.
>>>>
>>>>
>>>> I agree with the aim, but as discussed earlier I don't know of a way to
>>>> make this work in general. In general, the output of a KDF is "just
>>>> bytes" as far as the algorithm is concerned, so it's hard to see a way
>>>> to pick some bytes from that output and designate them as "special"
>>>> (i.e. key material). I suppose this case could be made to work if we
>>>> applied additional restrictions, but that may require a different API
>>>> that takes in key or secret handles rather than an ArrayBuffer.
>>>>
>>>>
>>>> Exactly. As described in the Netflix use case document, the idea is that
>>>> the shared secret created by a DH exchange is one which the app is never
>>>> allowed to access. There are clearly some pitfalls of implementation in
>>>> this model; if a generic Diffie-Hellman was compatible with this DH+KDC
>>>> model of key creation, the server would not know the difference between
>>>> a client which was using DH with exportable shared secret and the
>>>> special DH+KDC model.
>>>>
>>>> This requires a bit more design; there are ways of doing this that depend
>>>> on other types of attestation (imagine that the underlying implementation
>>>> created a signature on the DH public component sent by the client to the
>>>> server only when the special DH+KDC model was invoked, for example), but
>>>> in general, I still believe that the API can allow for this type of
>>>> exchange without even specifying the actual algorithms.
>>>>
>>>> ProtectedKeyExchange kex = ProtectedKeyExchange("KeyExchange Algorithm Foo");
>>>> Uint8Array client_pub = kex.getPublic();
>>>> /* ...send client_pub to server... */
>>>> /* ...get server_pub from server... */
>>>> /* complete exchange, created keying material precursor inside of kex */
>>>> kex.exchange(server_pub);
>>>> /* get handle to shared secret */
>>>> Handle handle = kex.getSharedSecret();
>>>> /* derive a session key */
>>>> Key key = Key.create(KDC.get("MyKDCAlgorithm"), handle);
>>>>
>>>> I know in our offline conversation, you brought up some good points
>>>> around FIPS compliance & only using the shared secret for a single key
>>>> derivation. Despite that cautionary advice, is there something that
>>>> would prevent us from accomplishing the above?
>>>>
>>>>
>>>>
>>>>
>>>> Mitch> In terms of expected user interaction in a browser, is there some
>>>> idea of a key store password, where the user has to enter the password to
>>>> explicitly export a wrapped key? Or is this a click-through dialog box
>>>> where the user simply clicks "Ok" and the webapp gains access to the raw
>>>> key?
>>>>
>>>>
>>>>
>>>> I was imagining a situation where this is determined by the key itself.
>>>> Most exportable keys would be exported with no user interaction at all,
>>>> and non-exportable keys would just fail. Keys stored on smart cards for
>>>> example may require UI but that is imposed by the card not the UA.
>>>>
>>>>
>>>> Ok. I was getting this mixed up with Ryan's & Mark's conversation around
>>>> domain bound vs. domain authorized sites. With "domain authorization"
>>>> though, in your model, if the site that created the key created it with
>>>> exportable=true, then the second site could just export the key without
>>>> any user interaction? (I don't think this is what you meant.)
>>>>
>>>> Mitch
>>>>
>>>>
>>>>
>>>>
>>>> From: Mitch Zollinger [mailto:mzollinger@netflix.com]
>>>> Sent: Monday, August 13, 2012 5:52 PM
>>>> To: public-webcrypto@w3.org
>>>> Subject: Re: ACTION-22: Key export
>>>>
>>>>
>>>>
>>>> On 8/13/2012 7:54 AM, Vijay Bharadwaj wrote:
>>>>
>>>>
>>>> We've gone around on this a few times, including at the f2f, so here is a
>>>> concrete proposal. I'm trying to find a balance between extensibility and
>>>> not loading up the API with a bunch of stuff, so feedback is welcome.
>>>>
>>>>
>>>>
>>>> I see the following use cases for key import/export:
>>>>
>>>> - Create session key object from derived key bytes (using either
>>>> KDF or secret agreement): this would require raw key import
>>>>
>>>>
>>>> I would add:
>>>> - Create session key object from derived key bytes, using KDF of
>>>> underlying keying material, which does not allow raw key import / export.
>>>>
>>>> As described during our f2f: we would like to use a KDF on a
>>>> Diffie-Hellman negotiated shared secret to create a session key (or
>>>> session keys) where the raw session key is never allowed to be accessed
>>>> by the webapp.
>>>>
>>>>
>>>>
>>>> - Create key object from public key received from peer (for
>>>> asymmetric encryption or signature verification): this would require
>>>> public key import, where the public key is likely ASN.1 encoded in many apps
>>>>
>>>> - Export/import (wrapped) content encryption key for data
>>>> encryption: this could be just the wrapped key or something like a PKCS#7
>>>> RecipientInfo (which is ASN.1 encoded). Import/export requires a handle
>>>> to the wrapping key.
>>>>
>>>> - Export/import of private keys for distribution, with formats like
>>>> PKCS#8.
>>>>
>>>>
>>>>
>>>> From an API perspective, supporting export seems to be straightforward.
>>>> The Key object needs an export (or wrap) method, which takes a target
>>>> format and potentially a wrapping key as parameters.
>>>>
>>>>
>>>> In terms of expected user interaction in a browser, is there some idea
>>>> of a key store password, where the user has to enter the password to
>>>> explicitly export a wrapped key? Or is this a click-through dialog box
>>>> where the user simply clicks "Ok" and the webapp gains access to the
>>>> raw key?
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> It seems to me there are two API models to support import. Either have an
>>>> ability to create an empty Key object, then invoke an import method on
>>>> that object, or make it part of the construction of the Key object. I
>>>> propose the latter, so that we don't complicate the state model of the
>>>> Key object.
>>>>
>>>>
>>>>
>>>> So in WebIDL,
>>>>
>>>>
>>>>
>>>> interface Crypto {
>>>>
>>>>
>>>>
>>>> … other stuff …
>>>>
>>>>
>>>>
>>>> KeyGenerator importKey(DOMString format, ArrayBuffer keyBlob,
>>>>                        optional Key wrappingKey=null);
>>>>
>>>> }
>>>>
>>>>
>>>>
>>>> interface Key {
>>>>
>>>>
>>>>
>>>> … other stuff …
>>>>
>>>>
>>>>
>>>> KeyExporter exportKey(DOMString format, optional Key wrappingKey=null);
>>>>
>>>> }
>>>>
>>>>
>>>>
>>>> Where KeyExporter is exactly like KeyGenerator but returns a value
>>>> instead of a Key object.
>>>>
>>>>
>>>>
>>>> One big issue is what key formats should be supported. For symmetric
>>>> keys it makes sense to support a raw format, but for asymmetric keys
>>>> things are more complex. As has been brought up on other threads, many
>>>> commonly-used formats are ASN.1 based and so it seems like supporting
>>>> that would really help interoperability. However, I'd like to avoid a
>>>> repeat of the mandatory algorithms discussion. Any ideas here are welcome.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
Received on Sunday, 26 August 2012 08:07:25 UTC
