Re: Early Draft Algorithms Section

----- Original Message -----
From: "Eastlake III Donald-LDE008" <Donald.Eastlake@motorola.com>
> %% What's the difference between a common and an uncommon stream cipher?

A common stream cipher simply generates a pad stream to be combined
(typically XORed) with the plaintext; an uncommon stream cipher has the
freedom to supply a different combiner, to use feedback from the
plaintext, and to do many other things that the more constrained common
stream cipher is not allowed to do.
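
To make the distinction concrete, here is a minimal sketch in Python
(all names are mine, purely for illustration) of the common XOR case
versus one uncommon variant that feeds plaintext back into the
generator state:

    def xor_combine(pad, plaintext):
        # Common case: the cipher only produces a pad; XOR is the combiner.
        return bytes(p ^ m for p, m in zip(pad, plaintext))

    def feedback_encrypt(keystream_step, state, plaintext):
        # Uncommon case: each plaintext byte is folded back into the
        # generator state, so later pad bytes depend on the message.
        # keystream_step(state) -> (pad_byte, new_state) is an assumed
        # primitive, not any particular cipher.
        out = []
        for m in plaintext:
            k, state = keystream_step(state)
            out.append(k ^ m)
            state = (state + m) % 2**32   # illustrative plaintext feedback
        return bytes(out), state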

> To this end I propose the addition of 4 algorithms that fall into
> the category of stream ciphers:
> OTP with XOR as a combiner (required)
> ARCFOUR/ARC4/RC4 with XOR as a combiner (recommended)
> BBS with XOR as a combiner (optional)
> ISAAC with XOR as a combiner (optional)
>
> %% What is the licensing status of RC4? I think we would like to avoid
> %% encumbered technology.

The licensing of the name is restricted; the algorithm is common
knowledge and is typically referred to as either ARCFOUR or ARC4. As
long as we call it ARCFOUR or ARC4 there is nothing they can do, but RSA
Labs does have the trademark on the RC4 name.

> %% I must admit I had never considered one time pads (which I assume
> %% is what you mean by OTP) as an algorithm.

Yes, that is what I meant. Based on your other comments, maybe we could
call it something akin to Raw Pad Combination or Pad-Based Encryption;
I'm sure someone can think of better alternatives.

> %% OTP seems
> %% inappropriate and I would oppose including it.

>
> %% It is my impression that RC4 is fast and BBS (by which I assume you
> %% mean Blum Blum Shub) has interesting provable security properties.
> %% Is ISAAC really enough different from RC4 to bother including?

Lately it has become evident that maybe it is. There is a problem with
RC4 that is proving to be more interesting than was previously thought.
There's a bias in the output where stream[i] == stream[i+1] with
probability greater than 1/256 (it actually shows up with probability
1/256 + 1/2^24). This pad determination is proving to be more
problematic and may result in an attack. In ISAAC, by contrast, there
are no known biases greater than 1/2^32. I think ISAAC is worth using.
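
For the curious, here is a minimal harness in Python for looking for the
adjacent-byte bias empirically. The sample size below is far too small
to resolve a deviation on the order of 1/2^24, so treat it as a sketch
of the measurement, not a demonstration of the result:

    import os

    def rc4_keystream(key, n):
        # Key-scheduling algorithm (KSA)
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Pseudo-random generation algorithm (PRGA)
        i = j = 0
        out = []
        for _ in range(n):
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return out

    # Count adjacent equal bytes over many random keys; an unbiased
    # generator would give a ratio of exactly 1/256 ~= 0.00390625.
    equal = total = 0
    for _ in range(1000):
        ks = rc4_keystream(os.urandom(16), 4096)
        equal += sum(1 for a, b in zip(ks, ks[1:]) if a == b)
        total += len(ks) - 1
    print(equal / total)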

>
> %% In any case, while this is something for the working group to
> %% decide, I don't know that adding any stream ciphers is actually
> %% worth the effort and specification bulk...

I think they are worth specifying. By specifying ciphers we are
selecting which ones will be used; if we are going to specify any
ciphers, we need to specify enough of them that the result will be used
by many people.

>
> > ## Some people like orthogonally specifying the different sub-algorithms
> > ## that go together to make up a suite and some think that opens holes
> > ## and gives opportunities for improper use.  In XML DSIG, we ended up
> > ## with a consensus for unified algorithm URIs, which bundled
> > ## together the hash, padding, and public key algorithm in a single
> > ## signature algorithm identifier.
>
> With the large selection of ciphers that is available, I don't think
> having a unified list is appropriate. Just as a beginning, Crypto++
> has 39 symmetric key algorithms, most of which will almost certainly
> be used with XML Enc in some form. There are also 14 hash functions.
> Using a unified selection gives 546 (39*14) entries just in the
> selection process, without even counting the unkeyed MAC functions.
> Going with a 2-phase setup, one for authenticator, one for encryptor,
> results in 53 entries (again without MAC functions), a savings of over
> 90% in space.
>
> %% Well, if you look in Schneier's book I'm sure you can find hundreds
> %% of algorithms, but so what?  The specification is extensible so
> %% that people who want to use rare, proprietary, or bizarre
> %% algorithms certainly can.  And that may be appropriate for some
> %% organizations that are operating in a closed environment.  I
> %% consider a primary goal to be interoperability so you really want
> %% to encourage a minimal diversity of algorithms.  If the parties can
> %% negotiate, then a profusion of algorithms just encourages breaks by
> %% negotiation to the least secure.  If the parties can't negotiate,
> %% then a profusion of algorithms either breaks interoperability or
> %% puts an enormous burden, especially on low-end limited memory and
> %% processing devices, to implement all these algorithms.  I, for one,
> %% just do not believe there is any good reason to provide for 39
> %% symmetric algorithms.  2 or 3 well-chosen strong algorithms are a
> %% much better idea from the viewpoint of the goals of
> %% interoperability and widespread implementation, including lower-end
> %% devices.

Those are the same arguments that were given about S/MIME, and they are
exactly the reason that it is primarily limited to the use of RC2-40 the
vast majority of the time.

We can try to select strong algorithms, but we will generally fail. 3DES
is a perfect example: it is nominally 168-bit cryptography, yet it now
takes only 2^90 work to break, which is just barely acceptable. Rijndael
is believed to be the weakest of the 5 AES finalists, and the likelihood
of it being broken in the next 10 years is fairly high. If we're going
to choose "strong" algorithms we need to consider future resilience,
something that does not come to mind when talking about 3DES and
Rijndael. By restricting ourselves to a small number of well-chosen
algorithms we immediately turn that small number into the primary target
list. The person who breaks 3DES or Rijndael will gain reputation
immediately, neither of them looks like it has particularly long legs at
this point, and 3DES may be broken simply by the march of technology.

>
> %% But I guess the above does not really speak directly to the question of
> %% orthogonality. We could go that way.  The integrity hash algorithm
> %% specification could be moved out as per correspondence with Amir.  For
> %% the Key Transport and Symmetric Key Wrap algorithms, the type of key
> %% wrapped or transported could be specified by the Type attribute of the
> %% EncryptedKey element, etc.  Hopefully others on the WG will indicate
> %% some preference on this.

I personally believe that would decrease the size of the specification,
not just if we supply the 39 algorithms given by Crypto++, but even once
we go beyond 2 symmetric algorithms and 2 integrity algorithms. At the
very least I would like to see MARS, Twofish, Serpent, and RC6. As the
other AES finalists, I believe they should each have a place in the next
standard, primarily because of their strength (only Rijndael had any
doubts about its strength raised by the general cryptographic
community), but also because having diversity at the super-strong level
is a very good idea.

[snip discussion that I believe was heading down the wrong path
regarding Rijndael/AES 128/192/256 or 128]
Let's try a different direction on this. The 192 and 256-bit versions of
Rijndael are stronger than the 128-bit version in more than one way. Not
only are there more bits in the key, but more rounds are used as well,
making them more secure than 128-bit Rijndael against almost every
attack (boomerang, slide, and a few other attacks notwithstanding). The
extra rounds offer not just extra resistance to brute force, but also
better resistance to other attacks. Regardless of how many ciphers we
choose, be it 3 or 300, I believe it would be beneficial to add at the
very least the 192 and 256-bit versions of Rijndael/AES.
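
For reference, the round counts in question come straight from the
Rijndael/AES specification; a trivial illustration:

    # Rounds used by Rijndael/AES at each AES key size (per the spec):
    AES_ROUNDS = {128: 10, 192: 12, 256: 14}

    for bits, rounds in sorted(AES_ROUNDS.items()):
        print(f"AES-{bits}: {rounds} rounds")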

> I'd skip the SHA-384 usage for the same reason that I wouldn't want
> SHA-256 truncated to 224 bits. SHA-384 is SHA-512 with the last bits
> removed.
>
> %% I'm aware of that and actually agree with you here but only because
> %% we are operating in the verbose XML environment.  There are other
> %% environments in which every bit counts and it makes perfect sense
> %% to truncate hash values down to just the strength you need.

I think we're in enough agreement here to stop the discussion on this
portion.


> I'd suggest Panama,

I would not recommend the use of the Panama hash; it is broken, and I
don't know why I put it in the list.

> Tiger, RIPEMD, HAVAL instead. It's not that I don't like
> them, it's that reliance on algorithms that have been submitted for
> public review before being standardized is not the most reasonable
> thing to do. The extended SHA series is relatively new, and relatively
> unstudied, so it should not be trusted for anything critical; by
> binding encryption algorithms to hash functions we restrict ourselves
> even further in this direction.
>
> %% Well, all you said before was that you thought they were too slow.
> %% Now you also say they are too green... I don't want to see five
> %% hash algorithms specified.  Personally I feel confident that the
> %% SHA series represents an honest and successful effort by NSA and
> %% the US government to produce secure hash functions.  Others may
> %% differ.

These are the same people who said SKIPJACK offered 80 bits' worth of
protection and were genuinely surprised when an attack was found
reducing it to 76 bits' worth of work. The same people who had to
replace SHA with SHA-1 when an attack was found by the public. The same
people who were behind the public with ECC for several years. I don't
completely trust them to design my hashes any more. I trust them more
than I would trust most people, but I believe better hashes can be
built.

> %% If one new hash algorithm
> %% were to be added, which one would you suggest?

If I had to pick just one hash to be added I would suggest RIPEMD-256,
and for a new class, the entire RIPEMD series. That would serve two very
important purposes in my opinion. First and foremost, it would introduce
diversity into the integrity specification. The second function would be
to remove the US-centricity from the integrity functions; this will make
the specification more likely to be adopted in Europe, where the RIPEMD
series is very popular.

> FIPS is the authoritative document.
> %% You can't implement 3DES from the FIPS.

Actually, FIPS 46-3 gives precise details on the construction and spells
out exactly how it is done, beginning at the bottom of page 15. It
describes 3DES in terms of DES, which was described in the previous
several pages.
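
As a sketch of what FIPS 46-3 specifies: TDEA is the
encrypt-decrypt-encrypt (EDE) composition of single DES. Assuming
hypothetical single-DES primitives des_encrypt and des_decrypt (not
defined here), the forward and inverse operations look like:

    def tdea_encrypt(k1, k2, k3, block, des_encrypt, des_decrypt):
        # FIPS 46-3 TDEA forward operation: E_K3(D_K2(E_K1(block))).
        # des_encrypt/des_decrypt are assumed single-DES primitives
        # operating on 64-bit blocks.
        return des_encrypt(k3, des_decrypt(k2, des_encrypt(k1, block)))

    def tdea_decrypt(k1, k2, k3, block, des_encrypt, des_decrypt):
        # Inverse operation: D_K1(E_K2(D_K3(block))).
        return des_decrypt(k1, des_encrypt(k2, des_decrypt(k3, block)))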

> %% The consensus of the WG has been that DH key agreement should be
> %% optional. To make it Recommended or Mandatory would, I think, require
> %% evidence of a new consensus.

I think it should remain optional to implement, but specifying it should
be mandatory, if for no other reason than to show how to link key
agreement into the system.

> %% I'm not sure that everyone involved in S/MIME and CMS would agree that
> %% with your assessment that they are not crypto people.  A lot of their
> %% stuff (which, by the way, I was not involved with designing) looks
> %% pretty kludgy, but I would not asses it, as you apparently do, as being
> %% insecure.

Take for example the Key Checksum. Being only 64 bits, it will take only
2^32 work to find a collision. 2^32 ~= 4 billion. A 1 GHz machine is not
uncommon, and it would only take that machine 4 seconds to count to
2^32. Calculating the SHA-1 of something doesn't take much effort; a few
hundred clock cycles per hash is not unheard of, so the whole search is
a few thousand seconds, a couple of hours at most. I don't think the
severe truncation of SHA-1 is beneficial to the security of the CMC Key
Checksum.
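
The back-of-the-envelope arithmetic, spelled out (the cycle count per
hash is an assumed ballpark figure for short inputs):

    # Rough cost of a birthday collision on a 64-bit checksum.
    checksum_bits = 64
    hashes_needed = 2 ** (checksum_bits // 2)  # ~2^32, birthday bound
    cycles_per_sha1 = 300                      # assumed ballpark
    cpu_hz = 1e9                               # a 1 GHz machine

    seconds = hashes_needed * cycles_per_sha1 / cpu_hz
    print(f"{hashes_needed} hashes, ~{seconds:.0f} s "
          f"(~{seconds / 3600:.1f} h)")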

> > ## The lack of choice of chaining modes is quite
> > ## deliberate, with the intent in the current draft of restricting
> > ## to only one.

> %% Adding chaining modes is like adding algorithms... it either breaks
> %% interoperability or bulks up implementations everywhere.

Well then, to resolve this, would you feel comfortable explicitly
allowing for alternate modes but only specifying CBC? That would at
least allow the option of alternate implementations. I do believe that
CBC is the best all-around mode, and since we already have the integrity
checks there is no real reason for the more exotic modes in most
circumstances. I think the first general-purpose mode that will be
commonly added is counter mode, not because it offers any great
security, but because so many people like it (I don't personally like
it, but I realize that it is becoming the mode du jour).
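
For concreteness, here is the CBC chaining rule itself as a minimal
sketch, with the underlying block cipher left as an assumed primitive:

    def cbc_encrypt(encrypt_block, key, iv, blocks):
        # CBC mode: each plaintext block is XORed with the previous
        # ciphertext block (the IV for the first block) before being
        # encrypted. encrypt_block(key, block) is an assumed
        # block-cipher primitive; blocks are equal-length byte strings.
        prev, out = iv, []
        for block in blocks:
            xored = bytes(a ^ b for a, b in zip(prev, block))
            prev = encrypt_block(key, xored)
            out.append(prev)
        return out
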
                Joe
