Re: On Crypto API Safety in the Hands of Unskilled Developers

On Thu, Mar 28, 2013 at 7:45 PM, Richard Barnes <> wrote:

>>> -- Add a toString() method (or something similar) to CryptoOperation
>> That's so anti-idiomatic to the way the web platform works. Why not
>> call it .toJOSE(), since effectively it becomes a format for saying
>> "This encryption algorithm, under these parameters"? I know you've
>> advocated for JOSE's use as an SPI, which is effectively what this is,
>> but that only further highlights that we're talking about something
>> higher level.
>> I further don't believe this is actually borne out by the real-world
>> use cases for this API; instead it rests on a notion of how developers
>> "might" use this, and how it "might" be easier for them. This is such
>> an abstract concept, and everyone has their own opinions about who the
>> 'ideal' user is, that I'd much rather focus on the actual concrete use
>> cases and identify:
>> 1) Does this address an unmet need?
>> 2) Is this syntactic sugar?
>> Look, compare this discussion to jQuery, Moo, Prototype, YUI, etc.
>> They all build a wide variety of sugar on top of the DOM that allows
>> developers to use different idiomatic approaches - but all yield the
>> same results.
>> I'm sympathetic to the notion that there should be a "right" way, if
>> only because it's convenient, but there's not some "right" way that's
>> magically going to lead to secure code, which is the crux of the
>> justification.
> I'm actually not being a JOSE partisan here.  Technically, the only requirement for this serialization is that it be something that the API can turn back into an object from which it can get algorithm, key, and data parameters.  (This might argue for making a separate "CryptoResult" object instead of hanging these things on CryptoOperation, but that's a minor point.)  JOSE seems like an obvious choice to me, but it could just as well be an opaque string that only a particular browser instance can interpret.
> If this were Java, this would just be the Serializable interface.  This is not a far leap from Clonable, which we already have for keys, algorithms, and data individually (Key objects, AlgorithmIdentifier dictionaries, and ArrayBufferViews).

This isn't Java, and it's a huge leap from structured clone, which
solves a very different problem than the one you've described here.

> Really, it might be enough here to just create a CryptoResult object (== alg+key+data), and make it Clonable.  That way, applications could at least store cryptographically protected objects locally.  A serialization of this format would help address applications with server-side storage, but I'll admit this is a secondary consideration.

And that's part of why this is misguided - When you actually look at
the API *and* the operating space, you quickly realize that the
ability to *exchange* messages is a key part of this API. Having
opaque, browser-only storage that *only* works with keys delivered via
a remote party does not address any of the use cases we've identified
- nor any of the many more that we've discussed.

That part is not something you bolt on after the fact - if you're
going to pretend you have persistence, you need interoperable,
exchangeable persistence. JOSE hopes to offer that - and that's why a
high-level API is relevant.
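To make that concrete, here's a rough sketch in plain JavaScript of why a JOSE-style serialization is exchangeable where an opaque, per-browser blob is not. The function names here are mine and purely illustrative; only the five-part compact shape (header.encryptedKey.iv.ciphertext.tag) comes from the JOSE drafts:

```javascript
// Illustrative only: the five-part JWE Compact Serialization shape.
// Any party that speaks JOSE can parse this and recover the algorithm
// and parameters; an opaque per-browser blob cannot be exchanged.
function b64url(buf) {
  return Buffer.from(buf).toString('base64')
    .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

function toCompact(parts) {
  // parts: { header: object, encryptedKey/iv/ciphertext/tag: Buffers }
  return [
    b64url(JSON.stringify(parts.header)),
    b64url(parts.encryptedKey),
    b64url(parts.iv),
    b64url(parts.ciphertext),
    b64url(parts.tag),
  ].join('.');
}

function parseCompact(compact) {
  const fromB64url = (s) =>
    Buffer.from(s.replace(/-/g, '+').replace(/_/g, '/'), 'base64');
  const [h, k, iv, ct, tag] = compact.split('.');
  return {
    header: JSON.parse(fromB64url(h).toString('utf8')),
    encryptedKey: fromB64url(k),
    iv: fromB64url(iv),
    ciphertext: fromB64url(ct),
    tag: fromB64url(tag),
  };
}

// The algorithm and parameters travel *with* the message.
const compact = toCompact({
  header: { alg: 'RSA-OAEP', enc: 'A256GCM' },
  encryptedKey: Buffer.from('fake-key'),
  iv: Buffer.from('fake-iv'),
  ciphertext: Buffer.from('fake-ct'),
  tag: Buffer.from('fake-tag'),
});
console.log(parseCompact(compact).header.enc); // → "A256GCM"
```

Note that nothing in the sketch depends on which browser produced the message - that is the whole point.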

Arguably, if you're going to continue down this path, the best thing
to do is work with Arun and show actual use cases, then demonstrate
how this is the better solution. You're proposing piecemeal changes
here that, taken together, leave the API thoroughly inconsistent.

>>> I've asked before for examples of how these are harmful.  Maybe I'm forgetting some, but the worst case I remember you bringing up is that default parameters could lock browsers into a choice.  As I explained in the essay below, that is a problem, but it's one that can be addressed by also making it easy for developers to capture all the data they need.
>> Which is, again, a task of an overall format and protocol, not an API.
>> Look, fundamentally we're talking KeyCzar/NaCL vs OpenSSL/NSS - a
>> point of debate we've long since settled, and which this is just
>> effectively the same conversation.
> I don't think anyone has ever complained that OpenSSL or NSS was too easy to use.  :)  Keep in mind that for exactly that reason, OpenSSL includes two simplified interfaces (CMS_encrypt and EVP_seal) that do generate parameters for you.

Thank you for highlighting what I've said from the start, Richard.

CMS_encrypt, the "simplified" interface you propose, is exactly what
I mean when we talk about high-level. Its very output is CMS messages
- the spiritual equivalent of JOSE - and we seemed to reach
consensus some time ago that JOSE represents a *high*-level API.

> You're also posing a false dichotomy here.  I'm not proposing that we remove functionality from the API.  All I'm saying is that we shouldn't force developers to touch all of the knobs all of the time.  They should only have to care about advanced settings if they're doing advanced things.
> So in fact, I'm actually proposing what OpenSSL / libcrypto offers: Fine-grained control when you want it (EVP_CipherInit, etc.), simpler things when you don't.

So a low-level API

And a high-level API

> Sure there are trade-offs, but to first order it's not that hard to make reasonably safe, widely-interoperable choices:
> RsaSsaParams.hash - Just use SHA-256
> EcKeyGenParams.namedCurve - Use P-384
> Etc.

I absolutely, positively do not want to engage in a "which algorithm
should we use" discussion - a position I have thoroughly and
repeatedly reiterated since our first meeting. However, just to point
out exactly why such discussions are misguided:
1) It fails to take into consideration that, on 64-bit machines,
SHA-512 can be significantly faster than SHA-256.
2) Likewise, there are performance considerations for P-256 vs P-384.
Your choice seems entirely arbitrary, given that NIST's guidance
treats P-256 as an acceptable security level well beyond 2030.
The point is that there are a huge number of tradeoffs here, and
despite the assertion that "browser vendors know best", the choice of
cryptography is a much more complex task than sprinkling magic
security dust - it involves and requires a thorough understanding of
the use case and threats.

> Yes, there are subtle trade-offs that one can get caught up in, but these choices are sufficient for a wide variety of applications.  And if an application wants to manage the trade-offs, it can override the UA's choices.
> The real question is who we think is best suited to make these choices.  The current API makes the assumption that developers will thoughtfully consider the trade-offs and come to good decisions.  I don't know how anyone can take that assumption seriously.  Even otherwise well-qualified web developers can't be expected to understand the nuances of cryptographic parameter selection.
> What I'm proposing is that much like the WebCrypto API allows application code to benefit from high-quality crypto code inside the browser, we allow developers to benefit from the high-quality cryptographic expertise inside the browser vendors.
> --Richard

Respectfully, I'm saying that cryptographic expertise is not
generalizable to arbitrary applications, *short of* a high-level API.

And, as I've said elsewhere, even the selection of a high-level API
requires a thorough understanding of the use cases and guarantees
provided. What KeyCzar provides to API consumers is different from
what NaCL provides, which is different from what JOSE provides, which
is different from [insert arbitrary high-level API here].

These are all valid discussions, but we should focus on the low-level
API, and I'm certain that this conversation is very much an aspect of
"high-level", for the reasons described above.

Received on Friday, 29 March 2013 18:15:14 UTC