
Re: WebCrypto API comments

From: Ryan Sleevi <sleevi@google.com>
Date: Fri, 5 Oct 2012 08:33:07 -0700
Message-ID: <CACvaWvbYuDCmX0w-EDD1jhqV0fvz6gyZvWUOtVJvN0gjncUzEQ@mail.gmail.com>
To: John Lyle <john.lyle@cs.ox.ac.uk>
Cc: public-webcrypto-comments@w3.org

On Mon, Oct 1, 2012 at 10:45 AM, John Lyle <john.lyle@cs.ox.ac.uk> wrote:
> Dear W3C Web Crypto Working Group,
> I have just read the Sept. 13 Working Draft after being forwarded the
> request for feedback by Virginie.  I think that it is remarkably clear and
> easy to follow and it has been designed at a reasonable level of abstraction
> (at least, roughly the level I expected).  At a first glance it seems not
> dissimilar to javax.crypto and slightly more abstract than the crypto API
> provided by middleware such as nodejs, which are likely to be the most
> familiar prior art for many developers.
> I have some comments, and while many are quite general I primarily have
> out-of-band provisioning of keys in mind.  Apologies if some of these
> comments go back over ground you have already discussed. I should also add a
> disclaimer - I know enough about cryptography to know that I can't comment
> on specific algorithm usage details or weaknesses.
> Comments:
> (1) The use case "Protected Document Exchange" is fine, but implies that
> User Agents will be able to distinguish between different users when
> encrypted data is received.  There are several contexts where this won't be
> the case (shared devices), so I suggest that this use case become more
> specific.  What kind of documents or scenarios are intended?  Furthermore,
> I'm not sure that the specification necessarily supports this use case
> unless we make quite a few further assumptions about how the user agent must
> protect keys, which was (as I understand) intentionally avoided.
> (2) I don't buy the Cloud Storage use case, I'm afraid.  Johnny still can't
> encrypt his email [2] so I'm suspicious of any use case suggesting he might
> choose a key to encrypt his data.  A better use case would make the role of
> the application (which I would expect to be more supportive and mask the use
> of cryptography and keys) clearer.

The use case is intentionally vague, but it's not at all unreasonable
to think that this may be mediated by the user agent (or an
extension). At this time, the use cases were attempting to be very
high level, as the charter calls out for the production of a separate
document to elaborate on use cases in more detail.

Regardless though, the use cases were examples of what types of
applications *can* be developed, not necessarily what will be
developed or how things should behave. In part, this is because
user-security interaction is hard, and I don't think our WG is
equipped for that Sisyphean task.

> (3) I wondered whether there was any plan to integrate use cases based
> around keys created in TLS-OBC [3]?  I guess this is out of scope.

Interactions with TLS (such as deriving keys from exported keying
material) are currently listed under secondary objectives, and at the
current pace of the WG, seem unlikely to be addressable in this
iteration. TLS-OBC is marginally better than raw keying-material
export, since OBCs are associated with origins rather than individual
connections (of which there may be many in a given resource load),
but I think the in-flux nature of OBC alone is enough to give pause
for now.

That said, I am individually very supportive of exposing OBCs in some form.

> (4) I agree with ISSUE-33.  Any automated way of spotting that keys or key
> materials are being misused should be seriously considered.  Similarly, I'm
> assuming any attempt to misuse an encryption key for signing (or vice versa)
> could result in errors?  If memory serves, the Trusted Computing Group
> specifications make the effort to dictate what operations each kind of key
> may be used for, it might be worth following their lead.

This is what the KeyUsage parameter is for.

> (5) Very minor grammatical error in section 4.1 - "avoids developers to care
> about" should be something like "avoids developers having to care about"
> (6) 5.1 - obtaining 'express permission' from users is impractical,
> considering the general usability of crypto systems. I don't recall seeing
> any use cases or details for why or when keys might be re-used by different
> origins in the specification, so it isn't clear why this is discussed or
> what the implications are.

Could you perhaps explain your concern further? "express permission"
is a typical requirement for APIs which may present some degree of
risk and utility to users - for example, pointer lock, geolocation,
web intents, etc. This may be a one-off request ("example.com wishes
to frob the whatnots") or it may be a per-operation request, depending
on user agent and implementation.

An example of keys being re-used in different origins is akin to the
TLS client auth case, in which a single certificate and private key is
used to negotiate security parameters with a number of independent
origins.

> (7) I think the 'security considerations for developers' in 5.2 could be
> improved.  It is important to note that secure storage isn't guaranteed, but
> what *is* supposed to be guaranteed by user agents?  Maybe nothing?  Perhaps
> more details about the threat model this API is assumed to be operating in
> would make sense. For instance - does it make sense to use this API when the
> browser is considered relatively trustworthy, but other web applications are
> not?  Or when the user and the web application trust each other?  I think
> the specification is fine, but a bit more rationale would be useful here, as
> well as a definition of the agents/principals involved.

I think you're correct in that the guarantees provided to a web
application are minimal to non-existent, which is inherent in any form
of cryptographic system that isn't built from the ground up on a
trusted platform. Just as using native crypto APIs on Windows or
Linux provides no guarantee you're actually performing crypto (e.g.
DLL injection, library preloading), roughly the same model applies to
the web.

What's not entirely clear to me is how you would suggest this be
improved. There can be any number of agents/actors involved here,
although the minimal collection is the user agent, the user, and the
web application. The user presumes full trust in the user agent, and
varying degrees of trust in the web application, and the web
application cannot trust the user or the user agent.

> (8) I have a more high-level concern that this API could be misused as a way
> of trying to push liability for security onto users and away from service
> providers.  As in, by using this API a service provider/developer might feel
> that it is then up to the *user* to store their keys safely and they can
> disclaim all responsibility.  This would be unfair.  There's not that much
> that can be done in a specification to prevent this, but I think it is worth
> bearing in mind when writing the 'security considerations for developers'
> section.

There are, unfortunately, a number of ways this API can be "abused"
through otherwise positive features. I'm not sure how successfully we
can or should codify those somewhat subjective viewpoints in the spec,
but certainly, this is something to highlight: keys can be easily
lost or compromised through means outside of the user agent, and the
only guarantee attempted is that they won't be easily lost or
compromised from within the user agent.

> (9) Section 11.1 - The specification would be improved with a state machine
> diagram as well as the text.  I also found it odd that the "processing"
> state was described as "ready to process data".  I would intuitively
> assume "processing" meant "busy". Perhaps this could be rephrased?
> (10) How does a web application discover what algorithms are supported by
> the user agent?  I may have overlooked something, but I couldn't see any
> examples.

It was decided to explicitly not support this functionality, since the
algorithms supported are dependent on the key being used and the
operation being attempted. A key that doesn't support signing may only
support encryption algorithms, for example, and a key that is stored
in a secure element may support a different set of algorithms than a
key that is stored on disk and managed by the user agent.

The canonical way to enumerate algorithms is to obtain a key, and
attempt to perform operations with it. This is, admittedly, not ideal.

> (11) I agree with ISSUE-31 - particularly for some of the potential OOB
> provision situations I can see that being able to discover a key based on
> custom attributes would be useful.  Of course this might make fingerprinting
> a bigger issue.

Discovery of keys that already exist (rather than keys created by or
inside of an origin) is almost certainly an operation that requires
user consent.
What has prevented ISSUE-31 is trying to decide some canonical way to
query for these parameters - which nominally requires enumerating the
existing parameters that would be useful to query on.

Do you perhaps have examples of attributes you would wish to use to
discover keys - either common or custom?
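For illustration only, the kind of query shape under discussion might look like the following. Nothing here is in the spec; both `keyStore` and `findKeys` are invented for this sketch, and deciding the queryable attribute set is exactly what blocks ISSUE-31:

```javascript
// Hypothetical in-memory stand-in for out-of-band provisioned keys,
// each carrying queryable attributes.
const keyStore = [
  { name: 'signing-key', algorithm: 'RSASSA-PKCS1-v1_5', usages: ['sign'] },
  { name: 'storage-key', algorithm: 'AES-GCM', usages: ['encrypt', 'decrypt'] },
];

// Return every key whose attributes match all fields of the query;
// array-valued attributes (like usages) match on membership.
function findKeys(query) {
  return keyStore.filter((key) =>
    Object.entries(query).every(([attr, want]) =>
      Array.isArray(key[attr]) ? key[attr].includes(want) : key[attr] === want));
}
```

Any real version of this would, as noted above, sit behind a user-consent gate and be constrained to limit fingerprinting.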

> I hope this feedback is useful.  I am happy to discuss or elaborate further.
> Best wishes,
> John
> --
> John Lyle
> Research Assistant
> Department of Computer Science, University of Oxford
> http://www.cs.ox.ac.uk/people/john.lyle/
> Part of the webinos project - http://webinos.org/bio-john/
> [1] http://www.w3.org/TR/2012/WD-WebCryptoAPI-20120913/
> [2] http://www.gaudior.net/alma/johnny.pdf
> [3] http://tools.ietf.org/html/draft-balfanz-tls-obc-00
Received on Friday, 5 October 2012 15:33:36 UTC
