
Re: Missing requirements




In message <9605231337.AA06227@mordred.sware.com>, Charles Watt writes:
> 
> > The real difficulty with datagram-based services is that channel
> > ciphers have state (DES CBC chaining variable, RC4 stream state, etc),
> > and datagram support require either reliable datagrams or a new set of
> > keys per datagram (in an extra header).  To directly modify SSL or PCT
> > to send the data records over UDP by just having in-band or
> > out-of-band key negotiation doesn't work: when each UDP datagram has
> > per-datagram keys/iv in its header, it's very different from either
> > in-band or out-of-band key negotiation.  And if you have reliable,
> > in-order datagrams, you didn't really need any changes.
> 
> This is incorrect -- just look at IPSEC for guidance.  Currently SSL
> really only runs (actual deployed versions) with RC4, a stream cipher.  
> You cannot reuse the key for a stream cipher or you open the door for 
> differential cryptanalysis.  However, it is quite secure to use a block 
> cipher, such as DES-CBC, for multiple datagrams.  New IVs can be exchanged
> easily in clear text header fields.  The chaining is only performed over 
> a single datagram, i.e., the algorithm is re-initialized with each 
> datagram.  There is no need to provide a reliable datagram service.

I was speaking of using the channel ciphers from SSL / PCT as is.
Certainly you can just update the IV (and unlink the packets) for
block ciphers.
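To make the block-cipher case concrete, here is a minimal sketch of per-datagram CBC framing: a fresh IV travels in the clear header of each datagram, and the chain restarts per datagram, so each one decrypts independently of arrival order. The XOR-based "block cipher" below is a deliberately insecure stand-in for DES (assumption made so the sketch is self-contained); only the framing is the point.

```python
import os

BLOCK = 8  # DES-sized blocks

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for DES: XOR with the key.  NOT secure -- framing demo only.
    return bytes(b ^ k for b, k in zip(block, key))

toy_block_decrypt = toy_block_encrypt  # XOR is its own inverse

def seal_datagram(key: bytes, payload: bytes) -> bytes:
    # Pad to a whole number of blocks (PKCS#5-style length-byte padding).
    pad = BLOCK - len(payload) % BLOCK
    payload += bytes([pad]) * pad
    iv = os.urandom(BLOCK)          # fresh IV, sent in the clear header
    out, prev = [iv], iv
    for i in range(0, len(payload), BLOCK):
        chained = bytes(a ^ b for a, b in zip(payload[i:i + BLOCK], prev))
        c = toy_block_encrypt(key, chained)
        out.append(c)
        prev = c                    # chaining only within this datagram
    return b"".join(out)

def open_datagram(key: bytes, dgram: bytes) -> bytes:
    iv, body = dgram[:BLOCK], dgram[BLOCK:]
    out, prev = [], iv
    for i in range(0, len(body), BLOCK):
        c = body[i:i + BLOCK]
        p = bytes(a ^ b for a, b in zip(toy_block_decrypt(key, c), prev))
        out.append(p)
        prev = c
    plain = b"".join(out)
    return plain[:-plain[-1]]       # strip padding

key = os.urandom(BLOCK)
d1 = seal_datagram(key, b"first datagram")
d2 = seal_datagram(key, b"second datagram")
# Datagrams may arrive reordered; each still decrypts on its own.
assert open_datagram(key, d2) == b"second datagram"
assert open_datagram(key, d1) == b"first datagram"
```

Since no state is carried across datagrams, lost or reordered packets cost nothing beyond the per-datagram IV overhead.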

Reusing a key for a stream cipher opens you up to a very simple
cryptanalytic attack, *not* differential attacks.  A differential
attack relies on gathering statistics from injecting related pairs of
plaintext blocks for a *block* cipher and examining the resultant pair
of ciphertext.  It requires the ability to mount a (nonadaptive)
chosen plaintext attack, and the goal is to extract enough information
to determine the key used.  See Biham & Shamir's nice book on
differential cryptanalysis.

What you're referring to is the simple fact that reusing a stream
key results in the same cipher output stream being xor'd into multiple
plaintext streams to produce the ciphertext streams.  Thus, the xor of
a pair of those ciphertext streams cancels the cipher output stream
and yields the xor of two plaintext streams, which presumably has
relatively low entropy and can be easily analyzed.  No need to
determine the key.
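The attack takes one line of arithmetic to demonstrate. In the sketch below, a hash-based generator stands in for RC4's output stream (an assumption for self-containment; any stream cipher reused with the same key behaves identically):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy PRG standing in for RC4's output stream.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"reused-session-key"
p1 = b"attack at dawn, bring the money"
p2 = b"retreat at dusk and keep quiet!"
ks = keystream(key, len(p1))
c1, c2 = xor(p1, ks), xor(p2, ks)

# The keystream cancels: the eavesdropper gets p1 XOR p2 directly,
# without ever learning the key.
assert xor(c1, c2) == xor(p1, p2)
```

From p1 XOR p2, standard redundancy-of-English techniques recover both plaintexts; the key itself never enters into it.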

This latter attack is a rather old, intro textbook attack and is quite
different from differential cryptanalysis.  It was also the design
flaw in Microsoft's old PWL password cache file format that was widely
discussed in many other fora a few months ago.

> Unless the networking textbooks have been rewritten recently, UDP is a 
> transport layer protocol.  There is no extra complexity required of a
> transport layer security protocol to support UDP, provided that you have 
> designed the protocol properly in the first place.

Transport layer protocols as defined in the ISO OSI reference model
provide reliable virtual channels on top of the network layer, which
provides unreliable datagrams.  UDP in the TCP/IP world is simply IP
datagrams with very little extra processing.  UDP packets may be lost,
reordered, or duplicated, just like the IP packets.

I guess we must have read different textbooks.

> Did you actually read any of my previous messages?  If you have a server
> that has secured its http and telnet servers, but hasn't secured any thing 
> else, such as ftp, rsh, nfs, your Web server is INSECURE.  It is insecure 
> because an attacker can subvert the underlying system by exploiting a hole 
> in one of the non-secured services.  Once in control of the system, it is 
> pretty straight forward to grab control of the Web server's data stream 
> above the transport layer security protocol.

No need for personal animosity.  I read your messages, and maybe you
believe that I misunderstood them or ignored portions of them, but I
_did_ read them.

In any case, you're broadening the picture to securing the systems in
general.  Not unreasonable, but security kernels are hard to do.

For the web server case, most people are careful enough to not run
them with telnet/ftp/rsh/nfs/smtp/etc services enabled, run them as
"expendable" machines the configurations of which can be restored
quickly from a known-to-be-good copy, and/or run them behind a packet
filter to eliminate the non-HTTP packets from the outside.  There is
very little reason for other services to exist.

I'm not saying that our security protocols shouldn't enable the
building of more secure systems; on the contrary, I think it's a good
idea.

Having a security protocol that enables us to make the other
connections secure, however, is no panacea.  Sure, we can make the
ftp/finger/etc ports all send encrypted/MAC'd data, but what does that
do to our interoperability?  Maybe it's a good idea for an enclave of
machines that do not want to talk to any other machines, but it is not
a general solution.  Having Web servers be "expendable" isn't a
general solution either, but it ameliorates a particular problem.

> If you properly split the transport layer security protocol into separate
> key management/authentication and data security components, then you 
> provide implementors with a choice.  They can either:
> 
> A) implement both components in-band within an application library, as both
>   SSL and PCT do now.
> 
> B) implement them separately, perhaps putting the data security component
>    within the protocol stack for stronger system security as does Hannah.
> 
> If the protocols are properly split, the two different implementations
> can be made INTEROPERABLE. This means that when running a sensitive
> application, such as on-line banking, the server could run with a
> Hannah-like implementation to ensure that ALL of its services are 
> protected.  The clients can run with just application support with no
> need to modify their protocol stacks.

I do not disagree with splitting things in principle.

If you want interoperability, note that the in-band version and the
out-of-band version are *not* interoperable as they stand.  That is
necessarily so, since one is trying to exchange keys in-band, and the
other is trying to do it out-of-band.

As I said earlier, an extra negotiation step is required to figure out
whether to do the key exchange in-band or out-of-band.  If we include
this negotiation step, then perhaps the two implementations could be
interoperable.  In either case, of course, a lot of code sharing can
take place.

> As you suggest, the DNS linking approach is a band-aid.  An attacker can
> control or spoof DNS such that they can appear as foobar.com.  They can
> apply to one of the umpteen trusted CA's and get a certificate saying
> they are foobar.com.  Your approach has now guaranteed that the user will 
> be automatically fooled.  This is not speculation.  This attack has already 
> been demonstrated.

Band-aids are useful when you have an owwwie; they're cheaper than a
limb transplant.  A band-aid makes the job of the attacker harder,
just as requiring multiple signatures on some legal documents makes
their forgery harder -- but still not impossible.

If these umpteen trusted CAs' certification policies require no
checking at all and rely solely on the DNS name, -I- certainly am not
going to trust those CAs.  This is a CA policy question, and I don't
think anybody is suggesting that we'd just take certificates from any
fly-by-night outfit who says they're being a CA for the week.

> As far as grabbing the user's credit card number goes, that is not a 
> problem that is going to be solved by a transport layer security protocol, 
> for the TLSP cannot know whether the server is authorized to view such 
> an item -- to reduce fraud, only the banks should have access to the #,
> not the merchants.  Only an application layer protocol, such as SET, along 
> with a supporting public key infrastructure, will adequately protect credit 
> card numbers.

Again, I agree in principle that something like SET is the right
way to go for credit card transactions.  Unfortunately, one of the
goals of having a transport level protocol is to enable faster
deployment (than, e.g., IPSEC), and also to be a more general solution.
Maybe SET-enabled browsers will be widely available -- I believe
Netscape had announced something along these lines.  Credit card
numbers, however, are not the only things that should be private, and
SET is not general enough to protect other forms of private
communications.

> > The issue does *not* boil down to trusting the CA.  It's more complex
> > than that and requires a more holistic approach -- [ ... ]
> 
> Of course it boils down to trusting the CA.  If you cannot trust the CA
> to correctly match an entity with its identification information in 
> whatever form that information may take, then the system cannot provide
> adequate assurance for authentication.
> 
> You are correct, however, that there is more to the problem than just
> trusting the CA, and that one important component is assisting the user.
> But I suspect that this process can never be totally automated, for only
> the user truly knows who it is the user is trying to contact.  A browser
> can make an educated guess, but it is still a guess.  As a real-life
> example,  [ ... ]

When I hear somebody say that something boils down to X, I take it to
mean that X is the only essential thing.  You seem to use it in a
different way: it does boil down to trusting CAs, but there -is- more
to the problem than just trusting the CAs.  Obviously we have
communication problems.

I've been claiming that the security problem is multifaceted, and that
trusting the CA is but one problem that needs to be solved -- thus
it does not "boil down" to trusting the CA; it is not the only thing
of essence.  Maybe we don't actually differ here on this point.

-bsy

--------
Bennet S. Yee		Phone: +1 619 534 4614	    Email: bsy@cs.ucsd.edu

Web:	http://www-cse.ucsd.edu/users/bsy/
USPS:	Dept of Comp Sci and Eng, 0114, UC San Diego, La Jolla, CA 92093-0114

