
Re: Missing requirements

From: Charles Watt <watt@sware.com>
Date: Wed, 22 May 1996 17:26:13 -0400 (EDT)
Message-Id: <9605222126.AA04598@mordred.sware.com>
To: bsy@cs.ucsd.edu
Cc: watt@sware.com, ietf-tls@w3.org

> > [...]  The point again is simply:
> > "Can the protocol support high security, high assurance implementations?"
> > If not, it is not a suitably generic protocol.  The current drafts of SSL 
> > and PCT.
> Cut-n-pasted from the proposed charter:
> > The TLS working group is a focused effort on providing security
> > features at the transport layer, rather than general purpose security
> > and key management mechanisms.  The standard track protocol
> > specification will provide methods for implementing privacy,
> > authentication, and integrity above the transport layer.
> And one of my points was that the TLS WG is not the IPSEC WG, nor is
> it spec'ing out a fully generic, do everything protocol.  Certainly
> non-repudiation is outside the scope of TLS -- nobody has mentioned
> signing every message yet AFAIK.  Hannah's scope includes this (as an
> option); TLS's does not.

Bennet, I was very specific about what I thought needed to be done, which was:

>> - Specification of two independent protocols:
>>         1) A transport layer security protocol that provides for security-
>>            enhancement of network communications above the transport layer
>>            using a set of cryptographic keys supplied by some outside
>>            mechanism.  This should be designed such that it is independent
>>            of (2) allowing for the potential replacement of (2) at some
>>            future date.
>>         2) A key management and authentication protocol to support (1).
>>            This protocol need not be generic for supporting other IETF
>>            efforts such as IPSEC.  It is hoped that a unified IETF key
>>            management protocol will eventually emerge to supersede this
>>            protocol.
>> - Although initial default algorithms should be specified, the design of the
>>   protocols should be independent of any specific cryptographic algorithms
>>   to permit potential future upgrade.
>> - The design of the two protocols should permit two possible modes of
>>   operation:
>>         1) intermixed over a single communications channel (in-band key
>>            management) similar to the current operation of SSL and PCT.
>>         2) over independent communications channels (out-of-band key
>>            management) as suggested by existing transport security
>>            standards.
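To make requirement (1) concrete: the record-layer protocol should be usable with traffic keys supplied by *any* outside mechanism.  A rough sketch of that separation (illustrative names only, nothing here is from any actual SSL/PCT specification):

```python
import hmac
import hashlib

class RecordLayer:
    """Security enhancement of data above the transport layer,
    independent of how the keys were negotiated (requirement 1)."""

    def __init__(self, integrity_key: bytes):
        # The key is supplied by an outside mechanism -- an in-band
        # handshake, an out-of-band key-management association, or
        # even manual configuration.  The record layer does not care.
        self.key = integrity_key

    def protect(self, payload: bytes) -> bytes:
        # Prepend a MAC so the receiver can detect tampering.
        mac = hmac.new(self.key, payload, hashlib.sha256).digest()
        return mac + payload

    def unprotect(self, record: bytes) -> bytes:
        mac, payload = record[:32], record[32:]
        expected = hmac.new(self.key, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(mac, expected):
            raise ValueError("integrity check failed")
        return payload

# Any key-management protocol (2) can drive the same record layer,
# and (2) can later be replaced without touching (1):
key = b"k" * 32                  # pretend this came from some KMP
rl = RecordLayer(key)
assert rl.unprotect(rl.protect(b"hello")) == b"hello"
```

Because the record layer takes the key as an opaque input, swapping out the key-management protocol at some future date touches nothing on the data path.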

This is only common sense and good protocol design.  It certainly isn't 
"spec'ing out a fully generic, do everything protocol" and doesn't mention 
signing every packet.  I've no idea where you came up with that.  

> The protocols that you fault with being too integrated /
> insufficiently generic, e.g., SSL or PCT or STLP, however, do not
> necessarily dictate system design.  They certainly include necessary
> protocol version numbers etc to provide compatibility, as well as the
> numbering for crypto primitive negotiation/selection.  If the claim is
> that key exchange/management protocols evolve too quickly and that an
> upgrade path should exist, then the versioning will already take care
> of that at the wire-level, and as long as the implementations are
> relatively clean upgrading should not pose great difficulties.

Of course they dictate system design.  There is no way to implement either
one except by bundling them together over the same communication channel.
They should be designed to support both in-band and out-of-band key
negotiation so that they can support datagram-based services and higher
assurance implementations.  This is not difficult to do, and the current
versions of SSL and PCT could be easily modified to meet these requirements
much like Hannah (NDSEP/PAKMP), IPSEC (IPSEC/Oakley), SP4 (SP4/KMP) and
other similar systems.
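The two modes are easy to illustrate: the same record protection runs unchanged whether the keys travel on the data channel or arrive separately.  A toy sketch (the channel lists and key derivation are stand-ins, not any real protocol):

```python
import os
import hashlib

def protect(key: bytes, data: bytes) -> bytes:
    # Stand-in for the record protocol; only the key *source*
    # differs between the two modes below.
    return hashlib.sha256(key + data).digest() + data

def verify(key: bytes, record: bytes) -> bytes:
    digest, data = record[:32], record[32:]
    assert digest == hashlib.sha256(key + data).digest()
    return data

# Mode 1: in-band -- key-management messages and data records are
# interleaved on a single channel, as SSL and PCT operate today.
single_channel = []
k1 = os.urandom(32)                          # produced by the handshake
single_channel.append(("key-mgmt", k1))      # negotiation traffic
single_channel.append(("data", protect(k1, b"record A")))

# Mode 2: out-of-band -- keys arrive over an independent channel
# (a separate key-management association); the data channel carries
# only protected records, which suits datagram-based services.
key_channel = [os.urandom(32)]
data_channel = [protect(key_channel[0], b"record B")]

assert verify(k1, single_channel[1][1]) == b"record A"
assert verify(key_channel[0], data_channel[0]) == b"record B"
```

Nothing about `protect`/`verify` changes between the modes; only the delivery path for the key does, which is the point of designing the two protocols independently.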

> If what you're looking for is to explicitly name the record layer one
> protocol and the key exchange/management layer another protocol,
> that's fine.  It may help to clarify thinking to separate these even
> more explicitly.  A security protocol, by any other name, resists the
> same attacks.
> Regarding the fact that DNS is not secure, we seem to have different
> opinions based on that same fact.  You seem to want everybody to use
> the Hannah-provided certified namespace and nothing else.  I think
> that's impractical.

I have no desire to constrain anyone's name space.  I have a strong desire
to see an IETF security infrastructure built upon well-conceived, robust
mechanisms.  Linking the Common Name to the domain name isn't one.

> I argue that because DNS is not secure, something needs to be done in
> order to link the namespaces together.  We have the certified names in
> one namespace, and DNS names in another.  If the Hannah protocols were
> to be used, the application (e.g., Web browsers) will speak in one
> namespace (DNS names), but the secured transport that Hannah provides
> will talk in another (fully certified names).  If I give you an URL
> (which contains a DNS name) and your browser ends up connecting to a
> fully certified agent, that agent may or may not have the expected
> correspondence to the DNS name.  How the DNS name is "resolved" to the
> Hannah certified name must be verified.  If your applications are
> Hannah-aware and deal only with certified names, fine.  If your
> application thinks in one namespace that doesn't trivially map into
> the Hannah-supplied certified namespace, then we've still got a
> problem.

There are several problems with linking authenticated identity with
DNS names:

1) A domain name contains little meaningful information about the entity
   with which it is associated.  Names are handed out first come, first
   served.  If a customer connects to www.llbean.com, are they connected
   to the famous mail order company or to an imposter?

2) Because DNS is insecure, an attacker can, at least within an isolated
   region, assume any domain name they choose.  If they can convince any of
   the standard CAs supported by Netscape that they are www.netscape.com,
   or convince the user to accept a new CA, then they are Netscape.

3) Do you suggest similar rules for client certificates?  If so, it is 
   unreasonable for me to be identified as watt@sware.com if I am also 
   watt@mindspring.com and watt@directpc.com.  If not, then the protocol 
   is worthless for many applications such as banking where it is perhaps
   more important to identify the client than the server.

My own opinion on this matter is that resolving an authenticated name to
a DNS name is a non-issue.  Who cares what domain name L.L. Bean uses for
their server as long as I am sure I am talking to L.L. Bean when I place
my order?  Establishing this assurance cannot be achieved by linking
the certificate name to the domain name.  I would prefer to encode
additional identification information in the certificate extension fields,
which can be verified by the CA during certification and displayed in a
convenient, human-readable format.  But whatever scheme is chosen, the 
crux of the trust issue boils down to whether or not you can trust the 
other guy's CA to properly identify and register those entities that 
it certifies. 
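To make the idea concrete, here is a toy sketch of identification carried in extension fields and rendered for the user.  The structures and field names are purely illustrative -- not X.509 syntax -- and the trusted-CA set stands in for whatever trust decision the user or site has made:

```python
# Toy structures only -- field names are illustrative, not X.509.
TRUSTED_CAS = {"ExampleCommerceCA"}   # CAs we trust to vet identities

certificate = {
    "subject_common_name": "www.llbean.com",   # the domain name: weak identity
    "issuer_ca": "ExampleCommerceCA",
    "extensions": {                    # identity info vetted by the CA
        "legal_name": "L.L. Bean, Inc.",
        "jurisdiction": "Maine, USA",
        "registration_no": "0000-EXAMPLE",     # made-up value
    },
}

def identity_summary(cert: dict) -> str:
    """Human-readable identification drawn from the extension fields.
    It is meaningful only if the issuing CA is one we trust to have
    properly identified and registered the entity it certified."""
    if cert["issuer_ca"] not in TRUSTED_CAS:
        return "UNVERIFIED: issuing CA not trusted"
    ext = cert["extensions"]
    return f"{ext['legal_name']} ({ext['jurisdiction']})"

print(identity_summary(certificate))   # prints "L.L. Bean, Inc. (Maine, USA)"
```

Note where the trust actually lives: the display is only as good as the CA's registration practices, which is exactly the crux identified above.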

But this is not a protocol issue and this is not the forum for deciding 
such issues.

> To provide security with application transparency, after the
> authentication protocol runs -- which authenticates in the certified
> namespace -- the implementation should try to make sure that the name
> supplied by the user -- which is a DNS name or a raw IP address -- has
> something to do with the authenticated peer.

How does this work with mobile IP?

Charles Watt
SecureWare, Inc.
Received on Wednesday, 22 May 1996 17:28:15 UTC
