Re: WebCrypto Security Analysis

On Tue, Mar 25, 2014 at 10:49 AM, Aymeric Vitte <vitteaymeric@gmail.com> wrote:

>  This thread shows that maybe someone should then ask Mozilla to change
> its policy.
>

Sorry, which policy?

>
>  As explained in [1], Presentation "Why the main page is not using https",
> we are forced to use http to load the main page (which then loads the js
> code over https, a kind of artifice...), because we cannot use ws with
> https. This is far more insecure than loading the page with https and
> using ws, so there is a "secure" mechanism on top of it, based on keys
> that only the users and the server know and that are never sent. Currently
> this is not extraordinarily difficult for a MITM to break, or even to
> retrieve the keys, but we don't really care because we will remove it: the
> target phase is a serverless P2P using WebRTC (whose insecure self-signed
> certificates used for DTLS will be secured by the Tor protocol), and the
> code is a standalone js file that can be retrieved from third parties and
> checked; it's unlikely that they are all compromised.
>
> The project uses SSL/TLS over WS (and not the contrary) as a js
> implementation, which again raises the need for a secure SSL/TLS
> implementation in WebCrypto.
>
> In our case (as, I believe, in all cases) this just ensures that you are
> talking with the one with whom you have established the SSL/TLS
> connection, which is enough: we don't care if the anonymizer peer is a
> MITM, since he can only know about the first hop, not what will happen
> next, and it's unlikely that all the peers are MITM.
>
> This also raises again the certificate management issue for WebCrypto,
> because it goes together with SSL/TLS.
>
> Not to mention ISSUE-22, which just means that our implementation cannot
> be moved entirely to WebCrypto for hashing.
>
> Maybe I missed some updates - I don't know the implementation status for
> Google, Mozilla and Microsoft - but maybe one point of interest for the
> INRIA study could be: should I move the Peersm project to WebCrypto when
> it's there?
>
> Of course the answer is likely to be yes, but, for example, why should I
> trust the WebCrypto PRNG in browser X? What if it depends on Windows?
> What is the process to make sure that browsers are implementing WebCrypto
> with no possibility of leaks?
>
> The point here is not to restart the same discussions, but perhaps this
> can give some ideas to INRIA...
>
> Regards
>
> Aymeric
>
> [1] http://www.peersm.com
>
> On 21/03/2014 20:58, Mark Watson wrote:
>
>
>
> Sent from my iPhone
>
> On Mar 21, 2014, at 11:58 AM, Ryan Sleevi <sleevi@google.com> wrote:
>
> On Fri, Mar 21, 2014 at 11:53 AM, Richard Barnes <rlb@ipv.sx> wrote:
>
>>   On Fri, Mar 21, 2014 at 2:38 PM, Ryan Sleevi <sleevi@google.com> wrote:
>>>
>>>   On Fri, Mar 21, 2014 at 11:34 AM, Mark Watson <watsonm@netflix.com>wrote:
>>>>
>>>>    On Fri, Mar 21, 2014 at 9:51 AM, Ryan Sleevi <sleevi@google.com> wrote:
>>>>
>>>>>
>>>>> On Mar 21, 2014 9:18 AM, "Mark Watson" <watsonm@netflix.com> wrote:
>>>>> >
>>>>> >
>>>>> >
>>>>> >
>>>>> > On Thu, Mar 20, 2014 at 1:01 PM, Ryan Sleevi <sleevi@google.com>
>>>>> wrote:
>>>>> >>
>>>>> >>
>>>>> >>
>>>>> >>
>>>>> >> On Wed, Mar 19, 2014 at 7:40 AM, Kelsey Cairns <
>>>>> kelsey.cairns@inria.fr> wrote:
>>>>> >>>
>>>>> >>> Dear W3C Crypto API WG,
>>>>> >>>
>>>>> >>> Here at INRIA we're starting a security analysis on the current
>>>>> draft
>>>>> >>> of the Crypto API, co-funded by INRIA and W3C. The idea is to try
>>>>> to
>>>>> >>> get some results in before the end of the last call period.
>>>>> >>
>>>>> >>
>>>>> >> Could you define what your actual goal is with this security
>>>>> analysis?
>>>>> >>
>>>>> >> Typically, one does a security analysis of a protocol - does it
>>>>> >> live up to the expected goals and provide the expected assurances?
>>>>> >> WebCrypto itself provides many algorithmic building blocks, and
>>>>> >> (with the exception, arguably, of Wrap/Unwrap) doesn't really
>>>>> >> implement a protocol itself (as opposed to something like JOSE JWS
>>>>> >> or XML DSig, which are arguably both formats *and* protocols).
>>>>> >>
>>>>> >>>
>>>>> >>>
>>>>> >>> Doing analysis of an API spec is a slightly unusual activity. It
>>>>> >>> can often lead to conclusions like "if the API is implemented this
>>>>> >>> way..." or "if the application program uses the API like this...",
>>>>> >>> which can seem a bit superficial, but we will aim to produce some
>>>>> >>> concrete output in terms of implementation advice, test cases for
>>>>> >>> implementations, etc.
>>>>> >>>
>>>>> >>>
>>>>> >>> As an example of the kind of things we find, one of the things we
>>>>> >>> were looking at in the spec this morning was padding oracles on key
>>>>> >>> unwrap operations. These are common in implementations of PKCS#11,
>>>>> >>> for example. Following the current WebCrypto spec, if you were to
>>>>> >>> unwrap a key using AES-CBC or RSA PKCS1v1.5, incorrect padding
>>>>> >>> would lead to "DataError" or "OperationError" respectively.
>>>>> >>> Meanwhile, if the ciphertext is correctly padded but the key is too
>>>>> >>> long or too short, the error is "SyntaxError". The fact that these
>>>>> >>> are different *could* be enough to allow a network attacker to
>>>>> >>> obtain the encrypted key by a chosen ciphertext attack, which would
>>>>> >>> be relevant, say, for use case 2.2 (Protected Document Exchange).
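>>>>> >>>
>>>>> >>> To make this concrete, here is a rough sketch (the unwrapping key
>>>>> >>> "kek" and the attacker-chosen ciphertext are hypothetical, and the
>>>>> >>> error names are those of the current draft) of how the differing
>>>>> >>> exceptions could be used as an oracle:
>>>>> >>>
>>>>> >>>   // Sketch only: returns the name of the rejection, which leaks
>>>>> >>>   // whether the CBC padding of the candidate ciphertext was valid.
>>>>> >>>   async function paddingOracle(kek, candidateCiphertext, iv) {
>>>>> >>>     try {
>>>>> >>>       await crypto.subtle.unwrapKey(
>>>>> >>>         "raw",                    // format of the wrapped key
>>>>> >>>         candidateCiphertext,      // attacker-chosen bytes
>>>>> >>>         kek,                      // AES-CBC unwrapping key
>>>>> >>>         { name: "AES-CBC", iv },  // unwrap algorithm
>>>>> >>>         { name: "AES-GCM" },      // algorithm of the unwrapped key
>>>>> >>>         false,
>>>>> >>>         ["decrypt"]
>>>>> >>>       );
>>>>> >>>       return "ok";            // padding and key length both valid
>>>>> >>>     } catch (e) {
>>>>> >>>       // "DataError"   -> CBC padding was invalid
>>>>> >>>       // "SyntaxError" -> padding valid, but not a usable key length
>>>>> >>>       return e.name;
>>>>> >>>     }
>>>>> >>>   }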
>>>>> >>
>>>>> >>
>>>>> >> Correct. This is a point of extreme tension within the working
>>>>> >> group - whether or not Key Wrapping / Unwrapping can provide
>>>>> >> security guarantees against the host code executing. This was the
>>>>> >> crux of the debate as to whether or not to provide these primitives
>>>>> >> to begin with - or whether a web application can polyfill them.
>>>>> >>
>>>>> >> Individually, I remain highly suspicious of this. As a
>>>>> >> security-minded individual, I can tell you there are dozens of ways
>>>>> >> to botch this, beyond just algorithm choice. As an editor, I can
>>>>> >> simply say "Please show more about how this is completely broken",
>>>>> >> so that the WG can take a closer look at the security guarantees
>>>>> >> it's attempting to make, and properly evaluate whether or not these
>>>>> >> APIs belong. I suspect that some members will insist they do,
>>>>> >> unfortunately, so guidance is welcome.
>>>>> >>
>>>>> >>>
>>>>> >>>
>>>>> >>> As a first step we were planning to look in more detail at the key
>>>>> >>> management subset of the API, but if there are any areas that are
>>>>> of
>>>>> >>> specific concern where you'd like us to take a closer look and you
>>>>> >>> haven't had time please let us know. All feedback welcome.
>>>>> >>>
>>>>> >>> Best,
>>>>> >>>
>>>>> >>> Graham Steel & Kelsey Cairns
>>>>> >>
>>>>> >>
>>>>> >> I think a clear point of use/misuse to examine would be the issues
>>>>> >> previously discussed in ISSUE-21
>>>>> >> (https://www.w3.org/2012/webcrypto/track/issues/21). The WG had, in
>>>>> >> the past, discussed requiring SSL/TLS for this API, as well as
>>>>> >> requiring more active mitigations for scripting issues via CSP
>>>>> >> (http://lists.w3.org/Archives/Public/public-webcrypto/2012Aug/0230.html).
>>>>> >> There were and are some strong objections to this.
>>>>> >>
>>>>> >> Since part of your sponsorship includes "implementation advice",
>>>>> >> and conclusions like "if the application program uses the API like
>>>>> >> this", it would be interesting to see if INRIA can come up with any
>>>>> >> proofs of security where the code is delivered over unauthenticated
>>>>> >> connections (e.g. HTTP).
>>>>> >>
>>>>> >> My continued assertion is that this is impossible - messages cannot
>>>>> >> be authenticated as coming from a user/UA, rather than a MITM.
>>>>> >> Likewise, under HTTP, a UA/user cannot authenticate messages as
>>>>> >> coming from the server, rather than a MITM. Encryption/Decryption
>>>>> >> results cannot be protected from being shared with Mallory, and
>>>>> >> there can be no authenticated key exchange without an OOB means.
>>>>> >> Especially because Mallory can modify the JS operating environment,
>>>>> >> any proofs of correctness of a protocol go out the window, because
>>>>> >> the operating environment for those proofs is malleable. In a
>>>>> >> PKCS#11 world, this would be similar to a "hostile token" that has
>>>>> >> no pre-provisioned aspects.
>>>>> >
>>>>> >
>>>>> > Ryan is right, of course, that security assertions that can be made
>>>>> > if the content is delivered over https cannot be made if the content
>>>>> > is delivered over http. However, this does not mean there are no
>>>>> > useful security assertions for the case where content is delivered
>>>>> > over http. It would be good to have the nature of the assertions
>>>>> > which can be made properly investigated and documented.
>>>>> >
>>>>> > Specifically, most of the assertions that can be made for the http
>>>>> > case are in the "Trust on First Use" category: if an authentication
>>>>> > key is agreed between client and server at time X, then the client
>>>>> > can be sure at time Y that they are talking to the same entity they
>>>>> > were talking to at time X (which may be a MITM, or may be the
>>>>> > intended server, you don't know). Likewise the server can be sure
>>>>> > they are talking to the same entity at time Y as they were at time X
>>>>> > (which, again, may be either a MITM or may be the client). If you
>>>>> > have other reasons to believe there was no MITM at time X, such
>>>>> > assertions can be useful.
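>>>>> >
>>>>> > For concreteness, a minimal sketch of what such a TOFU flow could
>>>>> > look like with this API (the names are illustrative only, and
>>>>> > persisting the key pair, e.g. in IndexedDB, is left out):
>>>>> >
>>>>> >   // Time X: generate a non-extractable signing key and register the
>>>>> >   // public half with the server (which may be Alice or a MITM).
>>>>> >   async function enrollAtTimeX() {
>>>>> >     const keyPair = await crypto.subtle.generateKey(
>>>>> >       { name: "ECDSA", namedCurve: "P-256" },
>>>>> >       false,                       // private key never exposed to JS
>>>>> >       ["sign", "verify"]
>>>>> >     );
>>>>> >     const spki = await crypto.subtle.exportKey("spki", keyPair.publicKey);
>>>>> >     // ... send spki to the server ...
>>>>> >     return keyPair;
>>>>> >   }
>>>>> >
>>>>> >   // Time Y: a valid signature only shows continuity with whoever saw
>>>>> >   // the public key at time X - it says nothing about who that was.
>>>>> >   async function proveContinuityAtTimeY(keyPair, challenge) {
>>>>> >     return crypto.subtle.sign(
>>>>> >       { name: "ECDSA", hash: "SHA-256" },
>>>>> >       keyPair.privateKey,
>>>>> >       challenge
>>>>> >     );
>>>>> >   }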
>>>>> >
>>>>>
>>>>> No, they really aren't.
>>>>>
>>>>> Regardless of how the key got there (and there are plenty of ways to
>>>>> screw that up), the fundamental analysis has to look at whether any of the
>>>>> assertions can be trusted if they are being processed by untrusted code:
>>>>>
>>>>> Sign: Did this message come from Alice or Mallory-injecting-script?
>>>>>
>>>>> Verify: Did this message come from Bob, or from Mallory with
>>>>> Mallory-injected script saying it is Bob?
>>>>>
>>>>> Encrypt: Is the Ciphertext sent to Bob the ciphertext that Alice
>>>>> intended, or modified by Mallory?
>>>>>
>>>>> Decrypt: Is the Plaintext processed by Alice what Bob sent in his
>>>>> Ciphertext, or is this Mallory?
>>>>>
>>>>> Wrap: Is this key actually Alice's key, or is it a key of Mallory?
>>>>>
>>>>> Unwrap: Is the unwrapped key actually what Bob intended, or is it one
>>>>> that Mallory injected?
>>>>>
>>>>> Mallory can also force an unprovisioned state at any time, so you need
>>>>> a way to authenticate that. WebCrypto cannot provide that, so you must
>>>>> rely on side-channels - such as Named Key Discovery or TLS.
>>>>>
>>>>   Yep, as I said, the assertions which *can* be made with http content
>>>> delivery are of the kind "This message came from the same entity
>>>> (Alice|Mallory) as I agreed keys with at some previous time X."
>>>>
>>>>  Are you disputing this assertion itself, or whether it is useful ?
>>>>
>>>>  ...Mark
>>>>
>>>
>>>   I'm disputing the assertion.
>>>
>>>  "I agreed keys with (Alice|Mallory) at some previous time X" - that
>>> statement is self-contained.
>>> "This message came from (Alice|Mallory)"
>>>
>>>  You can't assert at Time Y that the message came from the same party
>>> at Time X, if the *code* used to create that message is delivered
>>> insecurely.
>>>
>>>  The point being you may have agreed upon keys with Alice, but then the
>>> message came from Mallory - because Mallory injected her code to create a
>>> custom message using Alice's credentials.
>>>
>>>  Likewise, when you agree upon keys (at Time X), you can't be sure
>>> whether you're agreeing with Alice or Mallory, unless you're using a secure
>>> transport.
>>>
>>>  So without a secure transport at both Time X and Time Y, you can't be
>>> sure that the party you agreed upon keys with is the same party that
>>> authored the message using those keys. Which is the point.
>>>
>>
>>   One nuance that might be worth noting though:
>>
>>  If you've marked the key with extractable == false, you at least know
>> that you're talking to the same *device* at time Y as at time X.  (Modulo
>> things like key extraction/cloning below the JS layer, which aren't part of
>> our threat model.)
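>>
>>  To illustrate the property I mean, here is a minimal sketch (purely
>> illustrative, not taken from the spec text):
>>
>>   (async () => {
>>     // Non-extractable HMAC key: extractable is set to false.
>>     const key = await crypto.subtle.generateKey(
>>       { name: "HMAC", hash: "SHA-256" },
>>       false,
>>       ["sign", "verify"]
>>     );
>>     try {
>>       // Script can keep using the key, but cannot pull the bytes out.
>>       await crypto.subtle.exportKey("raw", key);
>>     } catch (e) {
>>       console.log(e.name); // "InvalidAccessError" - key is not extractable
>>     }
>>   })();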
>>
>>  I'm not sure how useful that property is given that there may be
>> Mallory's code running on that device, but...
>>
>>  --Richard
>>
>>
>  No, you don't have that guarantee - as the spec is clear to call out
> that UAs are free to store the key however they want.
>
>  If we're talking only in the context of "Mallory, the remote attacker",
> then sure, you have a guarantee of the same UA,
>
>
>  Which, as it happens, is one of the useful guarantees we are interested
> in for our service, even without anything else.
>
>  ...Mark
>
>    but not necessarily the same origin (or same application code), since
> as you note, Mallory may have postMessage'd the key to herself for later
> use, and can always send requests to the "Victim" at will later.
>
>
>
> --
> Peersm : http://www.peersm.com
> node-Tor : https://www.github.com/Ayms/node-Tor
> GitHub : https://www.github.com/Ayms
>
>

Received on Tuesday, 25 March 2014 16:25:15 UTC