
Re: Missing requirements



-----BEGIN PRIVACY-ENHANCED MESSAGE-----
Proc-Type: 4,MIC-CLEAR
Content-Domain: RFC822
Originator-Certificate:
 MIIBvzCCAWkCEFmOln6ip0w49CuyWr9vDVUwDQYJKoZIhvcNAQECBQAwWTELMAkG
 A1UEBhMCVVMxGDAWBgNVBAoTD1NlY3VyZVdhcmUgSW5jLjEXMBUGA1UECxMOU2Vj
 dXJlV2FyZSBQQ0ExFzAVBgNVBAsTDkVuZ2luZWVyaW5nIENBMB4XDTk1MDUwODIw
 MjMzNVoXDTk3MDUwNzIwMjMzNVowcDELMAkGA1UEBhMCVVMxGDAWBgNVBAoTD1Nl
 Y3VyZVdhcmUgSW5jLjEXMBUGA1UECxMOU2VjdXJlV2FyZSBQQ0ExFzAVBgNVBAsT
 DkVuZ2luZWVyaW5nIENBMRUwEwYDVQQDEwxDaGFybGVzIFdhdHQwWTAKBgRVCAEB
 AgICBANLADBIAkEM2ZSp7b6eqDqK5RbPFpd6DGSLjbpHOZU07pUcdgJXiduj9Ytf
 1rsmf/adaplQr+X5FeoIdT/bVSv2MUi3gY0eFwIDAQABMA0GCSqGSIb3DQEBAgUA
 A0EApEjzeBjiSnGImJXgeY1K8HWSufpJ2DpLBF7DYqqIVAX9H7gmfOJhfeGEYVjK
 aTxjgASxqHhzkx7PkOnL4JrN+Q==
MIC-Info: RSA-MD5,RSA,
 Cth65QvMCbaWipQlKN2Bt9vXdG+dpJW5Vy0yKA8/IUFRwf6CLr3Dl2JAiQEYhFcU
 rrVyQAFPwg6RUNz6MC8fudk=

Bennet Yee writes:
> 
> In message <9605222126.AA04598@mordred.sware.com>, Charles Watt writes:
> > [ separation into two protocols ]
> >
> > This is only common sense and good protocol design.  It certainly isn't 
> > "spec'ing out a fully generic, do everything protocol" and doesn't mention 
> > signing every packet.  I've no idea where you came up with that.  
> 
> I mentioned non-repudiation because you pointed me to Hannah as what
> is good.  I looked at Hannah and saw that non-repudiation is one of
> Hannah's optional specs.

Quickly scanning a lengthy protocol document obviously doesn't work.  Rather
than draw conclusions about a protocol from its three-sentence overview,
it would be best to read the actual specification, which states quite
clearly that only the authentication handshake is signed and that the
only non-repudiation provided is non-repudiation of access to the service.

> 
> While separation into two protocols may be "common sense" most of the
> time, we do have to be careful that the separation does not break the
> links in trust between the keys from the key exchange/management
> protocol and the data privacy/integrity protocol.

This is a very valid point.  However, correctly linking the two protocols
is straightforward.

> > Of course they dictate system design.  There is no way to implement either
> > one except by bundling them together over the same communication channel.
> > They should be designed to support both in-band and out-of-band key
> > negotiation so that they can support datagram-based services and higher
> > assurance implementations.  This is not difficult to do, and the current
> > versions of SSL and PCT could be easily modified to meet these requirements
> > much like Hannah (NDSEP/PAKMP), IPSEC (IPSEC/Oakley), SP4 (SP4/KMP) and
> > other similar systems.
> 
> Both SSL and PCT send records down a pipe.  At the -cryptographic-
> -protocol- level, it matters naught whether that pipe is a single TCP
> connection or two or three.  The security of the protocols doesn't
> depend on this -- that is necessarily so, obviously.

True, cryptographically it should be possible to run either protocol 
out-of-band (by this I mean the key exchange/authentication phase runs
over one TCP connection while the actual data encryption runs over a 
second) provided that:

1) the authentication protocol carries sufficient information to identify
   the targeted second connection.  This is necessary to support multiuser
   systems where different listeners may have different identities.

2) there is sufficient linkage between the two phases to maintain the
   cryptographic properties of the protocol.

Both SSL and PCT provide #2, but do not provide adequate information for
#1.  Therefore, in their current state they cannot be used to support an
out-of-band implementation.
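To make the two requirements concrete, here is a minimal sketch in modern
terms (Python's standard hmac/hashlib; this is purely illustrative and is
not the actual SSL, PCT, or Hannah key schedule).  The handshake carries an
explicit identifier naming the target second connection (#1), and the
data-channel keys are derived from the handshake's master secret together
with that identifier (#2), so the data channel inherits the handshake's
authentication.  The identifier format shown is a hypothetical example:

```python
import hashlib
import hmac
import os

def derive_channel_keys(master_secret: bytes, target: str, nonce: bytes) -> dict:
    """Bind the data channel to the handshake (requirement #2) while the
    handshake names the exact target connection (requirement #1).

    `target` is a hypothetical identifier for the second connection,
    e.g. "host:port:listener-id" on a multiuser system where different
    listeners may have different identities.
    """
    def prf(label: bytes) -> bytes:
        # Keyed derivation: without master_secret no one can produce
        # matching keys, so possession of the derived keys proves the
        # peer completed the authenticated handshake for THIS target.
        return hmac.new(master_secret, label + target.encode() + nonce,
                        hashlib.sha256).digest()

    return {
        "client_write_key": prf(b"client write"),
        "server_write_key": prf(b"server write"),
    }

# Both sides run the derivation independently after the out-of-band
# handshake; equal inputs yield equal keys.
nonce = os.urandom(16)
keys = derive_channel_keys(b"example master secret",
                           "198.51.100.7:443:uid-1042", nonce)
```

A handshake that named a different listener on the same host would yield
unrelated keys, which is exactly the property #1 is after.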

> The real difficulty with datagram-based services is that channel
> ciphers have state (DES CBC chaining variable, RC4 stream state, etc),
> and datagram support requires either reliable datagrams or a new set of
> keys per datagram (in an extra header).  To directly modify SSL or PCT
> to send the data records over UDP by just having in-band or
> out-of-band key negotiation doesn't work: when each UDP datagram has
> per-datagram keys/iv in its header, it's very different from either
> in-band or out-of-band key negotiation.  And if you have reliable,
> in-order datagrams, you didn't really need any changes.

This is incorrect -- just look at IPSEC for guidance.  The actually
deployed versions of SSL run only with RC4, a stream cipher.  You cannot
reuse a stream cipher key: two datagrams encrypted under the same RC4
keystream can simply be XORed together to cancel the keystream.  However,
it is quite secure to use a block cipher, such as DES-CBC, for multiple
datagrams.  New IVs can be exchanged easily in cleartext header fields.
The chaining is only performed over a single datagram, i.e., the algorithm
is re-initialized with each datagram.  There is no need to provide a
reliable datagram service.
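A sketch of that framing, with a toy XOR "block cipher" standing in for
DES (the stand-in is NOT secure; only the per-datagram framing is the
point here).  Each datagram carries a fresh cleartext IV in its header,
CBC chaining runs over that datagram alone, and decryption needs no state
from earlier datagrams, so loss and reordering do no harm:

```python
import os

BLOCK = 8  # DES block size in bytes

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for DES purely for illustration: XOR with the key.
    return bytes(b ^ k for b, k in zip(block, key))

toy_block_decrypt = toy_block_encrypt  # XOR is its own inverse

def seal_datagram(key: bytes, payload: bytes) -> bytes:
    """Encrypt one datagram independently: fresh IV in a cleartext
    header, CBC chaining re-initialized from that IV."""
    pad_len = BLOCK - len(payload) % BLOCK
    payload += bytes([pad_len]) * pad_len          # PKCS#5-style padding
    iv = os.urandom(BLOCK)
    out, prev = [iv], iv                           # cleartext IV header
    for i in range(0, len(payload), BLOCK):
        mixed = bytes(p ^ c for p, c in zip(payload[i:i+BLOCK], prev))
        prev = toy_block_encrypt(key, mixed)       # C_i = E(P_i xor C_{i-1})
        out.append(prev)
    return b"".join(out)

def open_datagram(key: bytes, datagram: bytes) -> bytes:
    """Decrypt using only this datagram's bytes -- no inter-datagram
    state, hence no need for reliable or in-order delivery."""
    iv, body = datagram[:BLOCK], datagram[BLOCK:]
    out, prev = [], iv
    for i in range(0, len(body), BLOCK):
        mixed = toy_block_decrypt(key, body[i:i+BLOCK])
        out.append(bytes(m ^ c for m, c in zip(mixed, prev)))
        prev = body[i:i+BLOCK]                     # P_i = D(C_i) xor C_{i-1}
    plain = b"".join(out)
    return plain[:-plain[-1]]                      # strip padding
```

Because every datagram seals and opens independently, a receiver can
process them in any order, exactly as IPSEC does.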

> Now, you are right that there are other ways to make dual channel (key
> mgmt vs traffic) work better.  And I wouldn't mind seeing datagram
> based services being supported if it wasn't too messy either.
> However, the TLS charter is for transport level security (TCP), not
> network level security (UDP/IP), so we should definitely weigh
> carefully the extra complexity overhead against the expected benefits
> (or change the charter).

Unless the networking textbooks have been rewritten recently, UDP is a 
transport layer protocol.  There is no extra complexity required of a
transport layer security protocol to support UDP, provided that you have 
designed the protocol properly in the first place.

> 
> As to whether we should make the TLS protocol by default permit both
> in-band and out-of-band key exchanges, I think that that might be just
> unnecessary complications.  Certainly cipher-spec negotiations are
> complicated enough; why add extra negotiation as to over which channel
> the keys should be exchanged?  Certainly with the primary consumer of
> the TLS protocol being web browsers (and maybe a few telnet sessions),
> the extra complexity doesn't seem to buy much.  What's the benefit to
> pay for the extra complexity?

Did you actually read any of my previous messages?  If you have a server
that has secured its http and telnet services, but hasn't secured anything
else, such as ftp, rsh, or nfs, your Web server is INSECURE.  It is insecure
because an attacker can subvert the underlying system by exploiting a hole
in one of the non-secured services.  Once in control of the system, it is
pretty straightforward to grab control of the Web server's data stream
above the transport layer security protocol.

If you properly split the transport layer security protocol into separate
key management/authentication and data security components, then you 
provide implementors with a choice.  They can either:

A) implement both components in-band within an application library, as
   both SSL and PCT do now; or

B) implement them separately, perhaps putting the data security component
   within the protocol stack for stronger system security, as Hannah does.

If the protocols are properly split, the two different implementations
can be made INTEROPERABLE.  This means that when running a sensitive
application, such as on-line banking, the server could run a Hannah-like
implementation to ensure that ALL of its services are protected, while
clients run with application-level support alone, with no need to modify
their protocol stacks.
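The split can be pictured as a record-protection component that consumes
keys without caring how they were negotiated.  This hypothetical sketch
(integrity-only, for brevity; the names and record format are mine, not
any protocol's) shows why choices A and B interoperate: an application
library and an in-stack implementation need only agree on the record
format and on the keys handed over by the separate key management
protocol:

```python
import hashlib
import hmac
import struct

def protect_record(key: bytes, seq: int, payload: bytes) -> bytes:
    """Data-security component: takes a key and a payload.  It neither
    knows nor cares whether the key came from an in-band library
    handshake (choice A) or an in-stack key manager (choice B)."""
    header = struct.pack("!QH", seq, len(payload))  # seq number + length
    mac = hmac.new(key, header + payload, hashlib.sha256).digest()
    return header + payload + mac

def unprotect_record(key: bytes, record: bytes) -> bytes:
    """Verify and strip the record framing; raises on tampering."""
    _seq, length = struct.unpack("!QH", record[:10])
    payload, mac = record[10:10 + length], record[10 + length:]
    expect = hmac.new(key, record[:10 + length], hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expect):
        raise ValueError("bad record MAC")
    return payload
```

Since both sides depend only on (key, record bytes), an on-line banking
server running the record layer inside its stack can talk to a client
running it inside a browser library.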

...

> I'll give an example as to why the DNS namespace and the certificate
> namespace needs some kind of linking.
> 
> Scenario: user sees a TV or a magazine ad.  The ad says to connect to
> http://foobar.com.  It doesn't matter whether it says LLBean or not.
> The user expects that if s/he connects to that site with his/her
> browser, s/he is visiting the site that placed the ad.  The user knows
> that the real, full name of the company is "Froobnitz Inc".
> 
> The certified namespace may very well say -- in the certificate --
> that the entity that possesses a certain private key is "Froobnitz
> Inc".  It should.  We cannot, however, always expect the user to
> actually look at the cert.  Nor could we expect the user to
> communicate to the browser the name "Froobnitz Inc" -- getting them to
> type in a URL is enough already.
> 
> How does the user know, when s/he sees the order form, that s/he is
> really talking to "Froobnitz Inc"?  S/he has to check the cert.  That
> seems to be your preferred approach.  Now, maybe we can automatically
> pop up the cert and force the user to click it away whenever s/he does
> a POST operation.  Any Human-Computer Interface person will tell you,
> however, that this will become automatic very quickly, and there would
> be very little security benefit in practice.  (And there are other
> security-relevant scenarios sans POSTing.)  And the attacker would end
> up with the user's credit card number.
> 
> Can we solve this problem at the protocol level?  Of course not.  Can
> we make some kinds of automatic checks feasible to make such an attack
> less likely?  Yes we can, by having CN checks.  (Even with checking IP
> addresses, lower IP-level spoofing [rather than DNS level] can still
> cause trouble -- the CN check is just an extra band-aid.)  And no,
> I'm not saying to get rid of nice, human readable stuff -- by all
> means retain that as well, and display it to the user if possible.

As you suggest, the DNS linking approach is a band-aid.  An attacker who
can control or spoof DNS can appear as foobar.com.  They can apply to one
of the umpteen trusted CAs and get a certificate saying they are
foobar.com.  Your approach has now guaranteed that the user will be
automatically fooled.  This is not speculation -- this attack has already
been demonstrated.

As far as grabbing the user's credit card number goes, that is not a
problem that is going to be solved by a transport layer security protocol,
for the TLSP cannot know whether the server is authorized to view such
an item -- to reduce fraud, only the banks should have access to the
number, not the merchants.  Only an application layer protocol, such as
SET, along with a supporting public key infrastructure, will adequately
protect credit card numbers.

> The issue does *not* boil down to trusting the CA.  It's more complex
> than that and requires a more holistic approach -- what it does boil
> down to is that we don't trust the users to not shoot themselves
> through negligence.  While for non-MAC systems we can't really -force-
> the users to behave securely (nor A or B level systems, but...) we can
> try to aid the users as much as possible.  If we don't, I think we're
> doing a bad job -- look, we've designed this unpickable lock; oops, we
> have to leave the windows open.

Of course it boils down to trusting the CA.  If you cannot trust the CA
to correctly match an entity with its identification information in 
whatever form that information may take, then the system cannot provide
adequate assurance for authentication.

You are correct, however, that there is more to the problem than just
trusting the CA, and that one important component is assisting the user.
But I suspect that this process can never be totally automated, for only
the user truly knows whom he or she is trying to contact.  A browser
can make an educated guess, but it is still a guess.  As a real-life
example, this winter I was planning a recruiting trip to Purdue.  I
brought my browser up on www.purdue.com, and was surprised to see a glowing
account of how Michigan trounced Purdue in football (the site has since
toned down its home page).  I was quite confused until I realized that I 
had really wanted www.purdue.edu.  No automatic checks in the world would 
have saved me if this had been a more convincing replica of what I had 
expected to see.  But a proper identification field within an X.509
extension, verified by a trustworthy CA and prominently displayed by the
browser, would have.

Charles Watt
SecureWare

-----END PRIVACY-ENHANCED MESSAGE-----

