- From: Oliver Hunt <oliver@apple.com>
- Date: Thu, 07 Nov 2013 14:53:56 -0800
- To: Ryan Sleevi <sleevi@google.com>
- Cc: "public-webcrypto-comments@w3.org" <public-webcrypto-comments@w3.org>
- Message-id: <281612E2-CA4B-46E4-904A-E5F6085BACDD@apple.com>
On Nov 7, 2013, at 2:34 PM, Ryan Sleevi <sleevi@google.com> wrote:

> On Thu, Nov 7, 2013 at 2:20 PM, Oliver Hunt <oliver@apple.com> wrote:
>>
>> On Nov 7, 2013, at 1:56 PM, Ryan Sleevi <sleevi@google.com> wrote:
>>
>>> On Thu, Nov 7, 2013 at 1:32 PM, Oliver Hunt <oliver@apple.com> wrote:
>>>> Hi all, I've been following the spec and various discussions regarding
>>>> subtle crypto (and the WebCrypto spec in general), and I do have a few
>>>> concerns.
>>>>
>>>> My opinion is that WebCrypto should make the simplest and most obvious
>>>> thing to do be correct. That is, the API should be as small as possible,
>>>> and have only those primitives that are absolutely necessary for
>>>> encryption, decryption, authenticity, and message verification. It
>>>> should not require extra work, or separate steps, for any of these; I
>>>> would argue that any encryption performed should default to producing a
>>>> content blob that is encrypted and includes, at minimum, verification,
>>>> and should make it _hard_ to override that default.
>>>
>>> Greetings Oliver,
>>>
>>> I'm glad you're getting up to speed on this. This particular point has
>>> been discussed since the very beginning of the WG, so there's certainly
>>> a lot of discussion here.
>>>
>>> In order to avoid rehashing much of the conversation, I'd like to pose
>>> two hopefully simple questions:
>>>
>>> 1) What use cases do you see this supporting?
>>> Context: The choice of use cases directly influences the choice of
>>> crypto combinations. Do you choose pre-built compositions like NaCl or
>>> KeyCzar, do you allow caller-specified combinations like JOSE, do you
>>> handle agreement schemes like OTR/mpOTR, do you handle deniability
>>> like Pond, or do you invent something new entirely?
>>
>> I need to think about this more.
In the absence of an existing crypto standard or protocol that provides
authenticity + verification as part of the standard transmission stream, I
would prefer a new protocol to simply continuing the series of crypto APIs
that make it hard to do the right thing and trivial to do it wrong.

>>> 2) How will you address the other use cases that have been identified
>>> with great enthusiasm by a number of members in this WG, ranging from
>>> certificate-based authentication schemes to supporting Javascript
>>> Applications (whether via Extensions/SysApps/Win8-Metro apps or simple
>>> hosted-in-a-page apps) with such a scheme?
>>
>> I would say step one of a new crypto API is to define a mechanism that
>> handles the most basic use cases without making incorrect behaviour
>> possible.
>
> I guess I was asking what you see as "the most basic use cases".

1. Send a message and verify that it has not been modified.
2. Send a message and authenticate the originator.
3. Send an encrypted message, which must by default verify that the data has
   not been modified (i.e. 1), and personally I would prefer that all such
   content also authenticate the originator (although I recall hearing of
   cases where that's not desired).

> When you enumerate the use cases (and their corresponding threat
> models), it becomes quickly apparent that in order to actually satisfy
> the diversity of use cases, you really do need composable low-level
> primitives.
>
> My fear with defining some mechanism like you describe is that we end
> up in a world of VRML, when really what developers want (and need) is
> WebGL. Even though the edges are unquestionably sharper (ha ha), it
> provides a much more robust and usable API.
>
> That said, in the past discussions, there have been a number of calls
> for proposals of such a high level API, along with use cases for it,
> but so far, no one has stepped forward to offer such an API or justify
> its necessity.
Every low-level API that currently exists has resulted in applications with
serious security problems, problems that sometimes turn up years after those
applications shipped. The majority of developers who want to use encryption
in their application are not aware of the potential problems in "correct"
code, and WebCrypto needs to support them, as they are the majority of
developers. Whether you like it or not, most developers in the world (in any
language or environment) are not experts in cryptography or its potential
pitfalls. Making a brand new API that replicates the major problems of all
the existing APIs sounds like an API that is just asking to be misused.

A crypto API should be tiny, as it is not trying to do everything (as VRML
did), and every additional API introduces a new way to screw things up. For
instance, no matter what we do, I fully expect to see someone use one of the
JS implementations of gzip (or whatever) and then transmit compressed
content that includes attacker-controlled data. We would be making it so
that the basic API can be trivially misused before we even reach subtle
issues like that, all in the name of supporting things that have no
standardised API (such as NaCl).

>>
>> Supporting arbitrary crypto for non-standard schemes is secondary and
>> can - if absolutely necessary - go under the unsafe namespace
>
> The conclusion reached during past discussions and face-to-face was
> the opposite. That is, supporting arbitrary crypto is a primary
> concern, as it allows a robust level of polyfills, and supporting
> 'opinionated' crypto-designs (whether the adoption of elements of
> JOSE/JWS/JWK/JWT or of some new protocol) is secondary.
>
> There is certainly tension between an API that's impossible to get
> wrong but not of much use, and an API that is difficult to get right
> but rich enough to support a plethora of applications, and the WG has
> repeatedly affirmed a commitment to the latter.
Cryptography is used for some extraordinarily important purposes; some
people's lives depend on the applications they use implementing their
cryptography correctly. This seems like a very good motivating factor for
making the simplest possible thing be the correct thing, even if it
requires extra work on our part.

>>>> I'm also wary of providing a byte-array representation of any of the
>>>> core primitives, as history has shown that doing so leads to
>>>> developers creating timing attacks.
>>>
>>> I would hope you could elaborate on this, since the very design is to
>>> reduce such timing attacks, by providing a sufficiently high level
>>> abstraction that timing sensitive operations (such as checking the
>>> padding bytes of a PKCS#1 message) are done within the implementation.
>>
>> Historically people have iterated byte streams to verify equality.
>> Providing an API to do this correctly is fairly meaningless: when there
>> is an obvious mechanism (for (...)) that allows developers to do the
>> wrong thing, they will.
>
> Certainly, constant-time and correct comparisons are one of the key
> motivations for this API, as they cannot be (reliably) implemented
> purely in JS. That is, while it may be possible to implement for a
> specific UA based on internal knowledge of its JS implementation and
> optimizer, a generic solution is not readily available.
>
> That said, such comparisons are designed to be unnecessary by the API.
> This is why, for example, HMAC supports "Verify", even though one does
> not traditionally think of such an operation as part of an API. This
> allows the implementation to "do the right thing", such as doing the
> constant-time memory equality check.

Why doesn't the API require message digests and verify by default? Also,
why is there a need to expose the byte array for anything? The problem here
is that you're expecting a developer to remember an API when they can
simply use a for loop themselves.
For instance, OpenSSL provides verification APIs, but the API inherently
makes it possible to use a for loop, and developers do. And then those
vulnerabilities are deemed out of scope.

> I'm not sure what you see as the alternative to this, although perhaps
> it's also conditioned on the presumption that the WG will define a
> cryptographic protocol (which we said was out of scope) and expose
> that via an API. If we were to use other data representation types
> (eg: DOMString), we'd be back in the same boat, since people would no
> doubt be doing string equality comparisons.

No, I'm saying keys, digests, encrypted streams, etc. should all be opaque,
maybe with the raw data being exposed through APIs in the "unsafe"
namespace.

—Oliver
Received on Thursday, 7 November 2013 22:54:41 UTC