Re: WebCrypto API comments

On Mon, Oct 8, 2012 at 7:49 AM, John Lyle <john.lyle@cs.ox.ac.uk> wrote:
> On 05/10/12 16:33, Ryan Sleevi wrote:
>>
>> On Mon, Oct 1, 2012 at 10:45 AM, John Lyle <john.lyle@cs.ox.ac.uk> wrote:
>>>
>>> (1) The use case "Protected Document Exchange" is fine, but implies that
>>> User Agents will be able to distinguish between different users when
>>> encrypted data is received.  There are several contexts where this won't
>>> be the case (shared devices), so I suggest that this use case become more
>>> specific.  What kind of documents or scenarios are intended?  Furthermore,
>>> I'm not sure that the specification necessarily supports this use case
>>> unless we make quite a few further assumptions about how the user agent
>>> must protect keys, which was (as I understand) intentionally avoided.
>>>
>>> (2) I don't buy the Cloud Storage use case, I'm afraid.  Johnny still
>>> can't encrypt his email [2], so I'm suspicious of any use case suggesting
>>> he might choose a key to encrypt his data.  A better use case would make
>>> the role of the application (which I would expect to be more supportive
>>> and mask the use of cryptography and keys) clearer.
>>
>> The use case is intentionally vague, but it's not at all unreasonable
>> to think that this may be mediated by the user agent (or an
>> extension). At this time, the use cases were attempting to be very
>> high level, as the charter calls out for the production of a separate
>> document to elaborate on use cases in more detail.
>
>
> Ok, I'm not familiar with this working group's approach to use cases.  If
> they are meant to be vague, then that's fine.  It would be helpful to see
> how the requirements for this API are derived from the use cases (e.g., the
> need to re-use the same key across multiple origins seems to come from 2.1)
> but that may not be your normal working practice, or is it in some other
> document?

Agreed. The goal should be a document similar to the MediaStream
Capture Scenarios document -
http://dvcs.w3.org/hg/dap/raw-file/tip/media-stream-capture/scenarios.html

>
>
>>> (4) I agree with ISSUE-33.  Any automated way of spotting that keys or
>>> key materials are being misused should be seriously considered.
>>> Similarly, I'm assuming any attempt to misuse an encryption key for
>>> signing (or vice versa) could result in errors?  If memory serves, the
>>> Trusted Computing Group specifications make the effort to dictate what
>>> operations each kind of key may be used for; it might be worth following
>>> their lead.
>>
>> This is what the KeyUsage parameter is for.
>
>
> Yes, I appreciate that.  I was checking that these usage parameters were
> _enforced_ as well as specified.   The specification states that they
> indicate "what CryptoOperations may be used with this key", but the spec
> doesn't say anywhere what would happen if you provide a key with the wrong
> usage parameters. I admit that this is probably blindingly obvious.

Anything that relies on being "blindingly obvious" will inevitably
become a source of mis-implementation.

Filed https://www.w3.org/Bugs/Public/show_bug.cgi?id=19416 to clarify
this in the next draft.

>
>
>
>>> (5) Very minor grammatical error in section 4.1 - "avoids developers to
>>> care
>>> about" should be something like "avoids developers having to care about"
>>>
>>> (6) 5.1 - obtaining 'express permission' from users is impractical,
>>> considering the general usability of crypto systems.  I don't recall
>>> seeing any use cases or details for why or when keys might be re-used by
>>> different origins in the specification, so it isn't clear why this is
>>> discussed or what the implications are.
>>
>> Could you perhaps explain your concern further? "express permission"
>> is a typical requirement for APIs which may present some degree of
>> risk and utility to users - for example, pointer lock, geolocation,
>> web intents, etc. This may be a one-off request ("example.com wishes
>> to frob the whatnots") or it may be a per-operation request, depending
>> on user agent and implementation.
>
>
>
> I didn't do a very good job explaining myself here, apologies.
>
> I was concerned that 'express permission' would just mean that the user
> agent would query the user at runtime to ask whether they want to use a
> particular key with a certain origin.  I find it difficult to imagine
> situations where such a dialogue would be helpful, as many people would
> struggle to understand the implications of granting or denying permission.
> Of course, this comes down to the fact that this specification is
> intentionally devoid of context - there may be situations where this makes
> sense if the user agent is clever enough.  But were I implementing this API
> in a browser, I would have no idea how to ask for this consent: the best
> solution would depend entirely on the web application and environment, which
> is why a general-purpose browser might struggle to implement a sensible
> permission request.

I think there are two points here: can a user agent show a dialog
(unquestionably, yes), and can a user form an appropriate understanding
of what the dialog is asking (... less so - Johnny can't encrypt, etc.)?

Usable Security is a "Hard Problem", and I'll be quick to admit that I
don't have any perfect solution in mind. I think the comparison that
is intended here is the existing "select your certificate" dialogs as
used by user agents for TLS client auth or for e-mail signing.

Simply exposing the certificates, to any origin, without any user
consent is troubling, since certificates may reveal information about
the user (e.g., a government-issued ID). Likewise, requiring user
consent to use any cryptographic functions at all is problematic, since
a number of use cases don't require any access at all to the user's
certificates. So the balance struck was that, if an origin wishes to
use some (pre-existing - whether from the OS or from some other origin)
certificate, it must give some set of criteria to the UA, and the UA
can then respond appropriately (showing a dialog, outright rejecting,
etc.).

>
> In comparison, if the origin (or out-of-band key issuer) responsible for
> creating a key was also required to indicate the origins it could be re-used
> with, that seems a more straightforward implementation task.  But this
> depends on the problem that this advice is trying to solve: if it is just
> mitigating threats related to cross-origin communication, then "express
> permission" from a user seems excessive.  Privacy and tracking concerns are
> another matter, of course, and it could be that the requirement is for a key
> to potentially be reused on _any_ origin.

Right. If the set of origins is constrained, that's certainly
preferable. However, this isn't always the case - again, consider the
use case of the government-issued ID. The set of origins that may wish
to know that information is unbounded - services may come and go, banks
may be rebranded, sites may be reorganized onto subdomains, etc. In the
SSL/TLS model, *any* SSL peer can request a particular certificate
issuer (or request *any* issuer), and that's a model we wish to permit
here as well.

>
>
>>
>> An example of keys being re-used in different origins is akin to the
>> TLS client auth case, in which a single certificate and private key is
>> used to negotiate security parameters with a number of independent
>> origins.
>
>
> That makes sense. So the requirement is that *any* key can potentially be
> shared with *all* origins?

This has been an area of active discussion. The intent is that some
origins know their keys will never be useful in any other context (e.g.,
a key only applicable to that app at that origin), while other keys may
be useful for multiple origins (the <keygen> equivalent of one origin
provisioning a key for arbitrary origins, or pre-provisioned keys like
those for client auth).

But I think your general statement holds - yes, any key MAY be shared
with multiple origins, but it's not required that all keys MUST be
shared.

>
>
>
>>
>>> (7) I think the 'security considerations for developers' in 5.2 could be
>>> improved.  It is important to note that secure storage isn't guaranteed,
>>> but
>>> what *is* supposed to be guaranteed by user agents?  Maybe nothing?
>>> Perhaps
>>> more details about the threat model this API is assumed to be operating
>>> in
>>> would make sense. For instance - does it make sense to use this API when
>>> the
>>> browser is considered relatively trustworthy, but other web applications
>>> are
>>> not?  Or when the user and the web application trust each other?  I think
>>> the specification is fine, but a bit more rationale would be useful here,
>>> as
>>> well as a definition of the agents/principals involved.
>>
>> I think you're correct in that the guarantees provided to a web
>> application are minimal to non-existent, which is inherent in any form
>> of cryptographic system that isn't built from the ground up on a
>> trusted platform. Just like using native crypto APIs on Windows or
>> Linux provides no guarantees you're actually performing crypto (e.g.,
>> DLL injection, library preloading), roughly the same model applies to
>> the web.
>>
>> What's not entirely clear to me is how you would suggest this be
>> improved. There can be any number of agents/actors involved here,
>> although the minimal collection is the user agent, the user, and the
>> web application. The user presumes full trust in the user agent, and
>> varying degrees of trust in the web application, and the web
>> application cannot trust the user or the user agent.
>
>
> Upon reflection, I think this section is probably fine.  My fear was that a
> developer seeking to implement the use cases described in section 2 might
> assume that using this API will solve all of their security problems for
> them, when the reality is obviously more complicated.  I was envisioning a
> whole load of badly implemented web applications making outlandish and
> unreasonable security claims.  However, that's a much wider problem than
> could be addressed in this section.  The granularity of this API makes it
> only appropriate for use by people with a security background, I think, but
> that's not very surprising.

Even with a "Perfect(tm)" high-level API, I think there's still an
inordinately high risk of people expecting this to be magic crypto
pixie dust. Any time "security" is under discussion, every snake-oil
salesperson within a hundred miles suddenly shows up - and I don't
think we'll escape this.

But yes, improving the security considerations has been an active area
of exploration, particularly following the public feedback, so I
greatly appreciate you raising this concern as others have.

>
>
>
>
>>> (11) I agree with ISSUE-31 - particularly for some of the potential OOB
>>> provision situations I can see that being able to discover a key based
>>> on custom attributes would be useful.  Of course this might make
>>> fingerprinting a bigger issue.
>>
>> Discovery of keys that already exist (rather than keys created by or
>> inside of an origin) is almost certainly an operation that requires
>> user consent. What has prevented progress on ISSUE-31 is trying to
>> decide on some canonical way to query for these parameters - which
>> nominally requires enumerating the existing parameters that would be
>> useful to query on.
>>
>> Do you perhaps have examples of attributes you would wish to use to
>> discover keys - either common or custom?
>>
>
> I would like to be able to query (at least in theory) for a key held in a
> Trusted Platform Module that is bound to particular platform configuration.
> E.g., it's possible with a TPM (again, in theory) to create a key that is
> only accessible when the platform boots into a certain operating system.
> I'd like to be able to request one of those in my web application.  This
> could be done by searching for keys with appropriate attributes (where
> attributes would contain details about the restrictions placed on the use of
> the key by the TPM).

Yup. This is a use case that we want to support as well.

>
> However, because this API does not (as far as I can tell) provide a way for
> a web application to obtain key validation data (some kind of key
> certificate guaranteeing the key's attributes), this isn't useful even if
> you could discover keys based on custom attributes.

No, this is the "We need a good proposal" issue. Wan-Teh Chang raised
ISSUE-31 ( http://www.w3.org/2012/webcrypto/track/issues/31 ) about
this, which I had some concerns about.

However, as a rough sketch, is that the sort of API you imagine?

>
> Another way to satisfy my use case would be for a user agent to implement a
> KeyGenerator for the TPM.  This could generate a new key on demand with the
> desired properties, create a key validation certificate and make it
> available as one of the key's custom attributes.  However, the
> KeyGenerator's 'generate' method doesn't accept arguments, so that doesn't
> work either.  I would want to pass the key's parameters and a nonce for
> verification to the generate method.

Providing provable/trusted key generation to an origin is not really
possible, as a general limitation of the trust model. It's also one
that is encumbered with an incredible amount of IPR and vastly
different requirements. GlobalPlatform, SKS, CertEnroll, etc are all
testament to this problem.

>
> This means the only option is to provision keys outside the user agent and
> keep a record of their ID for later use.
>
> I appreciate that this is a fairly niche use case and probably out of scope.
>
> Best wishes,
>
> John

The above has so far been the recommended solution. Key
*provisioning*, particularly for secure elements, is largely
hand-waved as out of scope (as it would require a WG, and sub-WGs of
its own, to find any sort of common agreement, and ends up more
enterprise-y than OAuth2). However, once you have such a key, being
able to use it via a web origin should "Just Work" as if the origin had
directly created it (modulo the above security considerations about
origin-authorized-by-user keys and origin-specified-during-keygen
keys).

Received on Tuesday, 9 October 2012 23:10:41 UTC