- From: John Lyle <john.lyle@cs.ox.ac.uk>
- Date: Mon, 08 Oct 2012 15:49:10 +0100
- To: public-webcrypto-comments@w3.org
On 05/10/12 16:33, Ryan Sleevi wrote:

> On Mon, Oct 1, 2012 at 10:45 AM, John Lyle <john.lyle@cs.ox.ac.uk> wrote:
>
>> (1) The use case "Protected Document Exchange" is fine, but implies that User Agents will be able to distinguish between different users when encrypted data is received. There are several contexts where this won't be the case (shared devices), so I suggest that this use case become more specific. What kind of documents or scenarios are intended? Furthermore, I'm not sure that the specification necessarily supports this use case unless we make quite a few further assumptions about how the user agent must protect keys, which was (as I understand it) intentionally avoided.
>>
>> (2) I don't buy the Cloud Storage use case, I'm afraid. Johnny still can't encrypt his email [2], so I'm suspicious of any use case suggesting he might choose a key to encrypt his data. A better use case would make the role of the application (which I would expect to be more supportive and mask the use of cryptography and keys) clearer.
>
> The use case is intentionally vague, but it's not at all unreasonable to think that this may be mediated by the user agent (or an extension). At this time, the use cases were attempting to be very high level, as the charter calls out for the production of a separate document to elaborate on use cases in more detail.

Ok, I'm not familiar with this working group's approach to use cases. If they are meant to be vague, then that's fine. It would be helpful to see how the requirements for this API are derived from the use cases (e.g., the need to re-use the same key across multiple origins seems to come from 2.1), but perhaps that isn't your normal working practice, or it is covered in some other document?

>> (4) I agree with ISSUE-33. Any automated way of spotting that keys or key materials are being misused should be seriously considered.
>> Similarly, I'm assuming any attempt to misuse an encryption key for signing (or vice versa) could result in errors? If memory serves, the Trusted Computing Group specifications make the effort to dictate what operations each kind of key may be used for; it might be worth following their lead.
>
> This is what the KeyUsage parameter is for.

Yes, I appreciate that. I was checking that these usage parameters were _enforced_ as well as specified. The specification states that they indicate "what CryptoOperations may be used with this key", but it doesn't say anywhere what happens if you provide a key with the wrong usage parameters. I admit that this is probably blindingly obvious.

>> (5) Very minor grammatical error in section 4.1 - "avoids developers to care about" should be something like "avoids developers having to care about"
>>
>> (6) 5.1 - obtaining 'express permission' from users is impractical, considering the general usability of crypto systems. I don't recall seeing any use cases or details in the specification for why or when keys might be re-used by different origins, so it isn't clear why this is discussed or what the implications are.
>
> Could you perhaps explain your concern further? "express permission" is a typical requirement for APIs which may present some degree of risk and utility to users - for example, pointer lock, geolocation, web intents, etc. This may be a one-off request ("example.com wishes to frob the whatnots") or it may be a per-operation request, depending on user agent and implementation.

I didn't do a very good job of explaining myself here; apologies. I was concerned that 'express permission' would just mean the user agent querying the user at runtime to ask whether they want to use a particular key with a certain origin.
I find it difficult to imagine situations where such a dialogue would be helpful, as many people would struggle to understand the implications of granting or denying permission. Of course, this comes down to the fact that this specification is intentionally devoid of context; there may be situations where this makes sense if the user agent is clever enough. But were I implementing this API in a browser, I would have no idea how to ask for this consent: the best solution would depend entirely on the web application and environment, which is why a general-purpose browser might struggle to implement a sensible permission request.

In comparison, if the origin (or out-of-band key issuer) responsible for creating a key were also required to indicate the origins it could be re-used with, that seems a more straightforward implementation task. But this depends on the problem this advice is trying to solve: if it is just mitigating threats related to cross-origin communication, then "express permission" from a user seems excessive. Privacy and tracking concerns are another matter, of course, and it could be that the requirement is for a key to potentially be reused on _any_ origin.

> An example of keys being re-used in different origins is akin to the TLS client auth case, in which a single certificate and private key is used to negotiate security parameters with a number of independent origins.

That makes sense. So the requirement is that *any* key can potentially be shared with *all* origins?

>> (7) I think the 'security considerations for developers' in 5.2 could be improved. It is important to note that secure storage isn't guaranteed, but what *is* supposed to be guaranteed by user agents? Maybe nothing? Perhaps more details about the threat model this API is assumed to be operating in would make sense.
>> For instance, does it make sense to use this API when the browser is considered relatively trustworthy, but other web applications are not? Or when the user and the web application trust each other? I think the specification is fine, but a bit more rationale would be useful here, as well as a definition of the agents/principals involved.
>
> I think you're correct in that the guarantees provided to a web application are minimal to non-existent, which is inherent in any form of cryptographic system that isn't built from the ground up on a trusted platform. Just like using native crypto APIs on Windows or Linux provides no guarantees you're actually performing crypto (e.g. DLL injection, library preloading), roughly the same model applies to the web.
>
> What's not entirely clear to me is how you would suggest this be improved. There can be any number of agents/actors involved here, although the minimal collection is the user agent, the user, and the web application. The user presumes full trust in the user agent, and varying degrees of trust in the web application, and the web application cannot trust the user or the user agent.

Upon reflection, I think this section is probably fine. My fear was that a developer seeking to implement the use cases described in section 2 might assume that using this API will solve all of their security problems for them, when the reality is obviously more complicated. I was envisioning a whole load of badly implemented web applications making outlandish and unreasonable security claims. However, that's a much wider problem than could be addressed in this section. The granularity of this API makes it appropriate only for use by people with a security background, I think, but that's not very surprising.

>> (11) I agree with ISSUE-31 - particularly for some of the potential OOB provision situations, I can see that being able to discover a key based on custom attributes would be useful.
>> Of course this might make fingerprinting a bigger issue.
>
> Discovery of keys that exist (rather than keys created by/inside of an origin) is almost certainly an operation which requires user consent. What has prevented ISSUE-31 is trying to decide some canonical way to query for these parameters - which nominally requires enumerating the existing parameters that would be useful to query on.
>
> Do you perhaps have examples of attributes you would wish to use to discover keys - either common or custom?

I would like to be able to query (at least in theory) for a key held in a Trusted Platform Module that is bound to a particular platform configuration. E.g., it's possible with a TPM (again, in theory) to create a key that is only accessible when the platform boots into a certain operating system. I'd like to be able to request one of those in my web application. This could be done by searching for keys with appropriate attributes (where the attributes would contain details about the restrictions placed on the use of the key by the TPM). However, because this API does not (as far as I can tell) provide a way for a web application to obtain key validation data (some kind of key certificate guaranteeing the key's attributes), this isn't useful even if you could discover keys based on custom attributes.

Another way to satisfy my use case would be for a user agent to implement a KeyGenerator for the TPM. This could generate a new key on demand with the desired properties, create a key validation certificate, and make it available as one of the key's custom attributes. However, the KeyGenerator's 'generate' method doesn't accept arguments, so that doesn't work either: I would want to pass the key's parameters and a nonce for verification to the generate method. This means the only option is to provision keys outside the user agent and keep a record of their IDs for later use. I appreciate that this is a fairly niche use case and probably out of scope.
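To make the 'generate' change I am suggesting concrete, here is a sketch of the shape I have in mind. It is entirely hypothetical: no such interface exists in the draft, and all of the names (HypotheticalTpmKeyGenerator, boundToPcrs, validationCertificate) are my own inventions for illustration.

```javascript
// Entirely hypothetical sketch -- not the draft API. A TPM-backed key
// generator whose generate() accepts the desired key parameters and a
// caller-supplied nonce, and returns the key's custom attributes together
// with key validation data.
class HypotheticalTpmKeyGenerator {
  generate(params, nonce) {
    // A real implementation would ask the TPM to create a key sealed to the
    // platform state named in params.boundToPcrs; here we only model the
    // shape of the result a web application would need.
    return {
      keyId: "tpm-key-1",
      attributes: {
        // The restrictions the TPM placed on the key, exposed as custom
        // attributes so the key could later be discovered by querying them.
        boundToPcrs: params.boundToPcrs,
        // The nonce is echoed back so the caller can check the freshness of
        // the validation certificate.
        nonce: nonce,
      },
      // Key validation data: a certificate over the key's attributes, which
      // the draft currently provides no way to obtain.
      validationCertificate: "(certificate over attributes, signed by the TPM)",
    };
  }
}

// Usage: request a key that is only usable when the platform has booted
// into a known operating system configuration.
const generator = new HypotheticalTpmKeyGenerator();
const key = generator.generate({ boundToPcrs: [0, 7] }, "fresh-nonce-123");
```

With something along these lines, the generator itself could attach the validation certificate as a custom attribute, which would close the gap I described above without needing a separate key-discovery query.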
Best wishes,

John
Received on Monday, 8 October 2012 14:49:29 UTC