RE: Early Draft Algorithms Section

&& See comments at &&

-----Original Message-----
From: Joseph Ashwood [mailto:jashwood@arcot.com]
Sent: Wednesday, May 16, 2001 3:13 PM
To: Public XML Encryption List
Cc: Donald Eastlake 3rd
Subject: Re: Early Draft Algorithms Section

----- Original Message -----
From: "Eastlake III Donald-LDE008" <Donald.Eastlake@motorola.com>
...

> %% It is my impression that RC4 is fast and BBS (by which I assume you mean
> %% Blum Blum Shub) has interesting provable security properties.  Is ISAAC
> %% really enough different from RC4 to bother including?

Lately it has become evident that maybe it is. There is a problem with RC4
that is proving more interesting than previously thought. There is a bias in
the output where stream[i] == stream[i+1] with probability greater than
1/256 (measurements put it at 1/256 + 1/2^24). This bias in the pad is
proving problematic and may lead to an attack. ISAAC, by contrast, has no
known biases greater than 1/2^32. I think ISAAC is worth using.
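
For what it's worth, here is a minimal Python sketch of what is being
measured, using a textbook RC4 implementation. Note the bias is only about
1/2^24 above 1/256, so confirming it statistically would need vastly more
samples than this; the sketch only illustrates the counting.

    # Textbook RC4 plus a count of how often consecutive output bytes match.
    # The bias is ~1/2^24 above 1/256, so this sample size is illustrative
    # only, not statistically significant.
    import os

    def rc4_keystream(key, n):
        # Key-scheduling algorithm (KSA)
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Pseudo-random generation algorithm (PRGA)
        out = bytearray()
        i = j = 0
        for _ in range(n):
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    equal = total = 0
    for _ in range(1000):                    # 1000 random 128-bit keys
        ks = rc4_keystream(os.urandom(16), 4096)
        equal += sum(ks[i] == ks[i + 1] for i in range(len(ks) - 1))
        total += len(ks) - 1

    print("observed %.8f vs 1/256 = %.8f" % (equal / total, 1 / 256))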

&& Does anyone else in the WG have an opinion on including a stream cipher?

> %% In any case, while this is something for the working group to decide,
> %% I don't know that adding any stream ciphers is actually worth the effort
> %% and specification bulk...

I think they are worth specifying. By specifying ciphers we are selecting
which ones will be used; if we are going to specify any ciphers, we need to
specify enough of them that the result will be used by many people.

&& By specifying optional ciphers, we are only biasing what will be used.
&& Anyone can generate their own URI identifier and use whatever algorithm
&& they want.

> > ## Some people like orthogonally specifying the different sub-algorithms
> > ## that go together to make up a suite and some think that opens holes
> > ## and gives opportunities for improper use.  In XML DSIG, we ended up
> > ## with a consensus for unified algorithm URIs, which bundled
> > ## together the hash, padding, and public key algorithm in a single
> > ## signature algorithm identifier.
>
> With the large selection of ciphers that is available, I don't think having
> a unified list is appropriate. Just as a starting point, Crypto++ has 39
> symmetric key algorithms, most of which will almost certainly be used with
> XML Enc in some form. There are also 14 hash functions. A unified selection
> gives 546 (39*14) entries in the selection process alone, without even
> counting the unkeyed MAC functions. Going with a two-phase setup, one for
> the authenticator and one for the encryptor, results in 53 entries (again
> without MAC functions), a savings of over 90% in space.
>
> %% Well, if you look in Schneier's book I'm sure you can find hundreds of
> %% algorithms, but so what?  The specification is extensible so that people
> %% who want to use rare, proprietary, or bizarre algorithms certainly can.
> %% And that may be appropriate for some organizations that are
> %% operating in a closed environment.  I consider a primary goal to be
> %% interoperability so you really want to encourage a minimal diversity of
> %% algorithms. If the parties can negotiate, then a profusion of algorithms
> %% just invites downgrade-by-negotiation to the least secure.  If the parties
> %% can't negotiate, then a profusion of algorithms either breaks
> %% interoperability or puts an enormous burden, especially on low end limited
> %% memory and processing devices, to implement all these algorithms.  I, for
> %% one, just do not believe there is any good reason to provide for 39
> %% symmetric algorithms.  2 or 3 well chosen strong algorithms are a much
> %% better idea from the viewpoint of the goals of interoperability and
> %% widespread implementation, including lower end devices.

Those are the same arguments that were given about S/MIME, and they are
exactly the reason that it is limited to RC2-40 the vast majority of the
time.

&& I would have said that was due to export restrictions early on, even though
&& those restrictions are no longer as stringent.  The mail environment is one
&& in which negotiation is frequently not available, so it makes sense to use
&& only the algorithms which the recipient is required to implement, i.e.,
&& which are mandatory in the standard.  There are other environments in which
&& negotiation is the norm.  I think XML Encryption will be used in a wide
&& variety of environments.  Where negotiation is not possible, only the
&& mandatory-to-implement algorithms guarantee interoperability, unless
&& you have some sort of closed system agreement...

We can try to select strong algorithms, but we will generally fail. 3DES is
a perfect example: it is nominally 168-bit cryptography, yet it now takes
only 2^90 work to break, which is just barely acceptable. Rijndael is
believed to be the weakest of the 5 AES finalists, and the likelihood of it
being broken in the next 10 years is fairly high. If we're going to choose
"strong" algorithms we need to consider future resilience, something that
does not come to mind when talking about 3DES and Rijndael. By restricting
ourselves to a small number of well-chosen algorithms, we put exactly those
algorithms on the primary target list. The person who breaks 3DES or
Rijndael will gain an immediate reputation, neither of them looks
particularly long-lived at this point, and 3DES may be broken simply by the
march of technology.

&& I guess we have different assessments of these things.  I do not believe there
&& is any fielded system in which 3DES would be the weakest link and I do not
&& think that situation will change in the next decade or more.  Given the
&& nature of the AES selection process, I do not believe that AES will be
&& broken by more than a few orders of magnitude in effort in the next
&& ten years.  But maybe I'll be proved wrong.

&& Given the WG consensus so far that 3DES and AES should be mandatory to
&& implement and a desire to avoid code bloat, what would you think about
&& defining an algorithm that compounded DES and AES?
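&&
&& No such compound algorithm is defined anywhere, so purely as a
&& hypothetical sketch of what "compounding" might mean: superencrypt a
&& 3DES-CBC ciphertext with AES-CBC under an independent key. Python,
&& using the pycryptodome library; the layering order, names, and IV
&& handling are illustrative assumptions, not proposal text.
&&
&&	# Hypothetical sketch only -- no compound DES/AES algorithm is defined.
&&	# One reading of "compounding": encrypt with 3DES-CBC, then encrypt
&&	# that ciphertext again with AES-CBC under an independent key.
&&	import os
&&	from Crypto.Cipher import AES, DES3
&&
&&	def compound_encrypt(k_des3, k_aes, plaintext):
&&	    # a multiple of AES's 16-byte block is also a multiple of 3DES's 8
&&	    assert len(plaintext) % 16 == 0
&&	    iv1, iv2 = os.urandom(8), os.urandom(16)
&&	    inner = DES3.new(k_des3, DES3.MODE_CBC, iv1).encrypt(plaintext)
&&	    outer = AES.new(k_aes, AES.MODE_CBC, iv2).encrypt(inner)
&&	    return iv1 + iv2 + outer          # both IVs travel in the clear
&&
&&	def compound_decrypt(k_des3, k_aes, blob):
&&	    iv1, iv2, outer = blob[:8], blob[8:24], blob[24:]
&&	    inner = AES.new(k_aes, AES.MODE_CBC, iv2).decrypt(outer)
&&	    return DES3.new(k_des3, DES3.MODE_CBC, iv1).decrypt(inner)
&&
&& e.g. compound_encrypt(os.urandom(24), os.urandom(32), data) for data
&& already padded to a multiple of 16 bytes.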

> %% But I guess the above does not really speak directly to the question of
> %% orthogonality. We could go that way.  The integrity hash algorithm
> %% specification could be moved out as per correspondence with Amir.  For
> %% the Key Transport and Symmetric Key Wrap algorithms, the type of key
> %% wrapped or transported could be specified by the Type attribute of the
> %% EncryptedKey element, etc.  Hopefully others on the WG will indicate
> %% some preference on this.

I personally believe that would decrease the size of the specification, not
just if we supply the 39 algorithms given by Crypto++ but even once we go
beyond 2 symmetric algorithms and 2 integrity algorithms. At the very least
I would like to see MARS, Twofish, Serpent, and RC6. As the other AES
finalists, I believe they should each have a place in the next standard,
primarily because of their strength (Rijndael was the only one whose
strength the general cryptographic community had any doubts about), but also
because having diversity at the super-strong level is a very good idea.

&& Of those, Twofish I believe is unrestricted. A number of the others are,
&& I believe, encumbered technology, which should be avoided.

[snip discussion that I believe was heading down the wrong path regarding
Rijndael/AES 128/192/256 or 128]
Let's try a different direction on this. The 192 and 256-bit versions of
Rijndael are stronger than the 128-bit version in more than one way. Not
only are there more bits in the key, but more rounds are used (12 and 14,
versus 10 for the 128-bit version), making them more secure than 128-bit
Rijndael against almost every attack (boomerang, slide, and a few other
attacks notwithstanding). The added key bits give extra resistance to brute
force, and the extra rounds give better resistance to other attacks.
Regardless of how many ciphers we choose, be it 3 or 300, I believe it would
be beneficial to add at the very least the 192 and 256-bit versions of
Rijndael/AES.

&& OK, unless others on the WG object.

...

> I'd suggest Panama,

I would not recommend the use of the Panama hash; it is broken. I don't
know why I put it in the list.

> Tiger, RIPEMD, HAVAL instead. It's not that I don't like
> them, it's that reliance on algorithms that have been submitted for public
> review before being standardized is not the most reasonable thing to do. The

&& I think you mean to say "not submitted".

> extended SHA series is relatively new and relatively unstudied, so it should
> not be trusted for anything critical; by binding encryption algorithms to
> hash functions we restrict ourselves even further in this direction.
>
> %% Well, all you said before was that you thought they were too slow.  Now
> %% you also say they are too green... I don't want to see five hash algorithms
> %% specified.  Personally I feel confident that the SHA series represents
> %% an honest and successful effort by NSA and the US government to produce
> %% secure hash functions.  Others may differ.

These are the same people who said SKIPJACK offered 80 bits' worth of
protection and were genuinely surprised when an attack was found reducing it
to 76 bits' worth of work; the same people who had to replace SHA with SHA-1
when an attack was found by the public; the same people who were behind the
public with ECC for several years. I don't completely trust them to design
my hashes any more. I trust them more than I would trust most people, but I
believe better hashes can be built.

> %% If one new hash algorithm
> %% were to be added, which one would you suggest?

If I had to pick just one hash to be added I would suggest RIPEMD-256, and
for a new class the entire RIPEMD series. That would serve two very
important purposes in my opinion. First and foremost, it would introduce
diversity into the integrity specification. Second, it would remove the US
centricity from the integrity functions, which will make the specification
more likely to be adopted in Europe, where the RIPEMD series is very
popular.

&& It is my understanding that RIPEMD-256 and RIPEMD-320, while they
&& provide more hash bits, are in fact not designed to actually be any
&& stronger than the 128- and 160-bit RIPEMD hashes.

> FIPS is the authoritative document.
> %% You can't implement 3DES from the FIPS.

Actually, FIPS 46-3 gives precise details of the construction, beginning at
the bottom of page 15, and spells out exactly how it is done. It describes
3DES in terms of DES, which is described in the preceding pages.

&& You're right.  So much for the efficacy of skimming a revised document
&& even if you are familiar with the original...

> %% The consensus of the WG has been that DH key agreement should be
> %% optional. To make it Recommended or Mandatory would, I think, require
> %% evidence of a new consensus.

I think it should remain optional to implement, but specifying it in the
standard is necessary, if for no other reason than to show how key agreement
links into the system.

&& That is its current status.  It is in the specification of the standard.
&& It is specified as Optional, which means optional to implement.

> %% I'm not sure that everyone involved in S/MIME and CMS would agree
> %% with your assessment that they are not crypto people.  A lot of their
> %% stuff (which, by the way, I was not involved with designing) looks
> %% pretty kludgy, but I would not assess it, as you apparently do, as being
> %% insecure.

Take for example the Key Checksum. Being only 64 bits, it takes only 2^32
work to find a collision, and 2^32 ~= 4 billion. A 1 GHz machine is not
uncommon, and it would take that machine only 4 seconds to count to 2^32.
Calculating a SHA-1 doesn't take much effort, a few hundred clock cycles is
not unheard of, so the whole search takes a few thousand seconds, a couple
of hours at most. I don't think the severe truncation of SHA-1 is beneficial
to the security of the CMS Key Checksum.
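
If memory serves, the checksum in question is just the first 8 octets of a
SHA-1 over the key (per my reading of RFC 2630; verify before relying on
it). A quick Python sketch of the computation and of the arithmetic above:

    # CMS-style key checksum sketch: first 8 octets (64 bits) of SHA-1
    # over the key, per my reading of RFC 2630 -- verify before use.
    # A 64-bit value admits a birthday collision in ~2^(64/2) = 2^32 trials.
    import hashlib

    def key_checksum(key):
        return hashlib.sha1(key).digest()[:8]

    trials = 2 ** 32
    cycles_per_sha1 = 500                     # "a few hundred clock cycles"
    seconds = trials * cycles_per_sha1 / 1e9  # on a 1 GHz machine
    print("~%.0f s (~%.1f h)" % (seconds, seconds / 3600))  # ~2147 s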

&& I was not involved and do not understand the design choices in the CMS
&& symmetric key wrap functions but I believe CMS obtains authentication
&& by other means, using digital signatures.  The Key Checksum looks like
&& something to whiten the results of the first encryption as input to the
&& super encryption and provide a simple check to avoid wasting decryption
&& effort if you are off on the key encrypting key.

> > ## The lack of choice of chaining modes is quite
> > ## deliberate, with the intent in the current draft of restricting
> > ## to only one.

> %% Adding chaining modes is like adding algorithms... it either breaks
> %% interoperability or bulks up implementations everywhere.

Well then, to resolve this, would you feel comfortable explicitly allowing
alternate modes but only specifying CBC? That would at least allow the
option of alternate implementations. I do believe that CBC is the best
all-around mode, and since we already have the integrity checks there is no
real reason for the more exotic modes in most circumstances. I think the
first general-purpose mode that will be commonly added is counter mode, not
because it offers any great security but because so many people like it (I
don't personally like it, but I realize that it is becoming the mode du
jour).
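
For reference, counter mode just encrypts an incrementing nonce||counter
block with the raw block cipher and XORs the result into the data, turning
the block cipher into a stream cipher. A minimal Python sketch with AES via
the pycryptodome library (the nonce/counter layout is my own assumption,
not any particular standard's):

    # Minimal counter-mode sketch: encrypt nonce||counter blocks with the
    # raw block cipher (ECB) and XOR the keystream into the data.
    from Crypto.Cipher import AES

    def ctr_keystream(key, nonce, nblocks):
        ecb = AES.new(key, AES.MODE_ECB)
        stream = bytearray()
        for ctr in range(nblocks):
            # counter fills the rest of the 16-byte block, big-endian
            stream += ecb.encrypt(nonce + ctr.to_bytes(16 - len(nonce), "big"))
        return bytes(stream)

    def ctr_crypt(key, nonce, data):
        ks = ctr_keystream(key, nonce, -(-len(data) // 16))  # ceil blocks
        return bytes(a ^ b for a, b in zip(data, ks))  # same op both ways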

&& I assume what you mean is orthogonally specifying the chaining mode.
&& With our current syntax, I think that would mean an element content
&& of EncryptionMethod.  So you would have something like
&&	<EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#aes">
&&		<Chaining>Counter</Chaining>
&&	</EncryptionMethod>
&& Seems simpler to just have http://www.w3.org/2001/04/xmlenc#aes-cbc
&& and http://www.w3.org/2001/04/xmlenc#aes-counter URIs. Even if there
&& are 4 or 5 algorithms and 3 or 4 chaining modes, you're talking a pretty
&& small table here.

                Joe

&& Thanks,
&& Donald
