
Re: Defaults: Getting concrete (round 2)

From: Ryan Sleevi <sleevi@google.com>
Date: Mon, 22 Apr 2013 16:47:55 -0700
Message-ID: <CACvaWvaE_wz0E4r0x8XmaE1XUUzVhMAkxAKE=oDi=1f+mmX7Ww@mail.gmail.com>
To: Richard Barnes <rbarnes@bbn.com>
Cc: Wan-Teh Chang <wtc@google.com>, Web Cryptography Working Group <public-webcrypto@w3.org>
On Mon, Apr 22, 2013 at 4:36 PM, Richard Barnes <rbarnes@bbn.com> wrote:
>
> On Apr 22, 2013, at 2:09 PM, Ryan Sleevi <sleevi@google.com> wrote:
>
>> On Sun, Apr 21, 2013 at 6:49 PM, Richard Barnes <rbarnes@bbn.com> wrote:
>>>
>>> On Apr 19, 2013, at 6:47 PM, Ryan Sleevi <sleevi@google.com> wrote:
>>>
>>>> On Fri, Apr 19, 2013 at 3:16 PM, Wan-Teh Chang <wtc@google.com> wrote:
>>>>> On Thu, Apr 18, 2013 at 11:14 AM, Richard Barnes <rbarnes@bbn.com> wrote:
>>>>>>
>>>>>> I agree that there are lots of protocols that have defined ways to shove things into the counter and IV fields for CTR and GCM.  They can always override the default.
>>>>>>
>>>>>> I'm more concerned about newer protocols that haven't done something similar (and probably don't need to).  Those protocols just need something that meets the security requirements, and it's easy enough for the UA to provide that.
>>>>>>
>>>>>> We've also seen that application designers can get counter/IV generation badly wrong, as with the recent nonce reuse issue in JOSE:
>>>>>> <http://www.ietf.org/mail-archive/web/jose/current/msg01967.html>
>>>>>>
>>>>>> So while you're right that there are protocols that will not make use of the default, I think that newer things can benefit from having a safe default here.
>>>>>
>>>>> 1. Let's first consider the counter field for the CTR mode.
>>>>>
>>>>> Unless the UA knows about all the CTR mode encryptions that have been
>>>>> done with the key in question, the UA cannot generate a new counter
>>>>> value that hasn't been used before.
>>>>>
>>>>> This requires the UA to be the exclusive user of the key in question.
>>>>> But if the API allows the key to be exported, the UA won't be the
>>>>> exclusive user of the key.
>>>>
>>>> Or a new key imported that has been used previously.
>>>>
>>>> I definitely don't think implementations should be trying to track
>>>> what the 'used' counters are - that's certainly the realm and
>>>> responsibility of a high-level protocol, and no API does this.
>>>
>>> Obviously, in the fully general case, there's no way the UA can guarantee uniqueness.
>>>
>>> There is one clear case where the UA knows for sure the entire set of counters that has been used, namely for non-exportable keys generated by the UA.  Likewise for exportable keys that have not been exported (which the UA knows).
>>
>> No, this is not at all a realistic statement.
>>
>> No crypto library tracks IVs. When you think of the space of possible
>> IVs (2^128), it's entirely unreasonable to suggest that UAs should.
>> Otherwise, you've implemented a trivial DoS. Heck, you've implemented
>> arbitrary storage, simply by allowing a web application to flip bits
>> in IV consumption.
>>
>> Given how much push back there was simply to track *algorithm* usage,
>> I'm truly surprised that a discussion about tracking *IVs* is being
>> entertained.
>
>
> Holy straw man, batman!  The proposal was not for the API to track every IV used ever, but to keep something like a message counter for generating message nonces. Again, looking at SP800-38A, B.2:
> """
> A second approach to satisfying the uniqueness property across messages is to assign to each message a unique string of b/2 bits (rounding up, if b is odd), in other words, a message nonce, and to incorporate the message nonce into every counter block for the message. ... A procedure should be established to ensure the uniqueness of the message nonces.
> """
>
> Really, the only reason that we're talking about anything other than generating a random string is that the language here is written in terms of absolute uniqueness, whereas 800-38D (GCM) is in terms of probabilistic uniqueness.  CTR mode could meet the GCM requirements for IVs completely statelessly.
>
>
>> And to what end? The security assurances provided by such a solution
>> are dwarfed by the complexity of implementation and the security risks
>> therein.
>
> Again, trivial complexity here.  For CBC/OFB/GCM, it's just a call to getRandomValues.  For CTR, the stupendous complexity is something like:
>
> function generate_ctr(key) {
>   key.messageCount++;
>   return MSB(96, PRF(key.messageCount + localSalt));
> }
>
> Does that address all use cases?  Of course not.  But it covers the case where the key is only used to encrypt locally.  If protocols want to do something fancier, they can generate their own IVs.
>

And there it is. "is only used to encrypt locally".

I again reiterate that we should not be trying to shovel these use
cases into the low-level API, given the added complexity it brings.

As you note, this description of CTR is ONLY useful in describing a
protocol ('for encrypting locally'). In the case of any multi-party
CTR protocol, the caller will exclusively and explicitly be setting
the CTR according to the underlying protocol being implemented.

This "simplification" is an attempt to integrate protocol-level
semantics into an API. I do not think that is at all appropriate.

>
>
>> I again propose that we drop this topic.
>>
>>>
>>> As Wan-Teh points out, an RBG-based CTR approach can offer very low probability of counter reuse.  For a 32-bit counter / 96-bit nonce, the probability of nonce re-use would be 2^-96 as long as no single encryption processed more than 16GB.  It wouldn't be strictly FIPS compliant, but it would be practically there.  And if apps care, they can generate their own IVs to ensure uniqueness.
>>
>> We should NOT be baking cryptographic protocols into the low-level API
>> - which is exactly what this is.
>>
>>>
>>> I would posit that these use cases -- UA-generated keys and encryptions <16GB -- cover a broad enough swathe of likely usages that they are worth addressing.
>>
>> Strongly disagree here, for the reasons above.
>>
>>>
>>>
>>>>> 2. As to the IV field for GCM, I think the UA can use the RBG-based
>>>>> construction of the IV in Section 8.2.2 of NIST SP 800-38D. I believe
>>>>> this is what you proposed.
>>>>>
>>>>> 3. This makes me wonder if an RBG-based construction of the counter
>>>>> field for the CTR mode would also be acceptable if the probability of
>>>>> reusing a counter value is low enough.
>>>>>
>>>>> Wan-Teh
>>>>>
>>>>
>>>> I think the inconsistency argument should be the one we're looking at here.
>>>>
>>>> The argument for having the UA generate the IV is not one being made
>>>> on technical requirements, but simply on the basis that "People (may)
>>>> use it incorrectly."
>>>
>>> I guess our disagreement is on the risk = likelihood * impact calculation for IV re-use.  I'm claiming that the likelihood of a non-crypto-expert developer re-using an IV is non-negligible, and the impact is likely to be severe.  You are apparently claiming that either or both of these factors is effectively zero.  That doesn't really seem plausible to me.
>>
>> I suggest your logic is fundamentally flawed by attempting to design
>> for the non-crypto-expert here. We've repeatedly had this conversation
>> - particularly whenever defaults are brought up. We've repeatedly
>> assessed that "no API can serve two masters" - attempting to split the
>> API like that only serves to make a worse API (which I think the
>> discussion on this thread clearly shows how wildly inconsistent and
>> unpredictable it becomes).
>>
>> Again, I'm extremely sympathetic to the non-crypto-expert here - but I
>> don't think it's at all reasonable or well-thought-out to try to
>> shoe-horn it in as a last minute design consideration. These are,
>> again, the exact same points that were raised the last time we
>> discussed defaults - and from a number of people, not just me.
>>
>>>
>>>
>>>> The fact that the IV needs to be protected
>>>> (outside of the AEAD case) should be a compelling enough reason for us
>>>> to suggest it's a false argument being presented.
>>>
>>> Could you clarify in what sense the IV needs to be protected?  I assume you don't mean confidentiality protection [1].  And in any case, I don't really see how that bears on how the IV is generated.
>>
>> Integrity, not confidentiality. The fundamental issue of the
>> "Cryptographic Doom Principle".
>
> This not being a phrase with which I was familiar, I googled it and found Moxie's description...
> <http://www.thoughtcrime.org/blog/the-cryptographic-doom-principle/>
> ... and noted that it has nothing to do with IVs.  Moxie's point is that you need to integrity-protect ciphertext to prevent padding-oracle attacks.
>
> If you have some serious analysis to present here, it would actually be very helpful on the CFRG list, for JOSE.
> <http://www.ietf.org/mail-archive/web/cfrg/current/msg03381.html>

While the term itself is frequently used by Moxie, surely you realize
the issue here - and Moxie's paper/preso equally spells it out quite
clearly.

In the case of non-AEAD mechanisms (e.g. those WITHOUT an inbuilt MAC),
the ability to influence the IV is equivalent to the ability to
influence the resulting plaintext. In the case of CTR, it's the same
as flipping bits in the result. In the case of CBC, it affects
block+1.

I have no interest in trying to solve JOSE's cryptographic design
issues - if JOSE doesn't have cryptographers reviewing it, that's an
issue for JOSE.

However, the point is that for any non-AEAD algorithm, the IV MUST be
protected by a MAC if the encryption is to have any semblance of
integrity protection. For that reason, the application developer will
ALWAYS need the IV (and the resulting ciphertext) in order to apply a
MIC/MAC, so that tampering can be detected before the decrypted
plaintext is trusted.

The doom principle applies because in mac-then-encrypt, you can never
protect the IV - which is part of why mac-then-encrypt is so flawed.

It's the same as allowing an attacker to supply the IV and exploit the
algorithmic weaknesses.

So the net savings of one line of code is hardly worth the added
overhead AND it yields a false sense of cryptographic protocol
security - because generating the IV is NOT the most important part of
the operation.

Again, I feel like we're running in circles here trying to "protect
the little guy", at the expense of continuing to propose inconsistent
APIs that are full of edge cases or implementation complexity, and
with little *actual* security benefit to show for it.

If you want strong security guarantees, the *low-level crypto* is NOT
going to be what the attacker will exploit. No matter how easy or hard
it is.

>
>
>> You're arguing that IV generation prevents footguns by some measurable
>> sense. I'm repeatedly asserting that this is demonstrably not the case
>> - and that whatever incremental value derived from trying to do so is
>> vastly eclipsed by both the implementation and cognitive complexity
>> from the wildly inconsistent API needed to service this.
>
> As I've argued above, the implementation complexity is trivial: random generation plus maybe a counter.  It's not clear to me how you think this is inconsistent, given that there are already defaults for some parameters (e.g. tagLength).  The only difference is that these are reset per operation.
>
> --Richard
>
>
>
>
>>
>>>
>>> --Richard
>>>
>>>
>>>
>>> [1] From SP 800-38A: "The IV need not be secret, so the IV, or information sufficient to determine the IV, may be
>>> transmitted with the ciphertext."
>>>
>>>
>>>
>>>
>>>
>
Received on Monday, 22 April 2013 23:48:23 UTC
