Re: Merged Transport Layer Protocol Development

Actually, using the pre-encrypted data mechanism to handle
hardware-encrypted data is *not* appropriate.  The problem is as
follows: the ChangeCipherSpec message is defined to send the key for
the pre-encrypted data.  If we want to claim that the encryption
hardware is more secure than the host handling that connection, we
should not trust the host to properly encrypt the hardware encryption
key for transfer to the client either.  Certainly if the hardware key
is encrypted with the TLS session key, the security of (the
pre-encrypted portion of) the channel is only as good as the weaker
of the two keys.
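
A rough sketch of the concern (the names and the stand-in XOR "cipher"
below are hypothetical, not the actual STLP formats or algorithms):

    # Hypothetical illustration: the hardware key k_hw is wrapped under the
    # TLS session key k_sess before being sent to the client.
    import os

    def xor_cipher(key, data):
        # Stand-in cipher: XOR with the repeated key.  Any real cipher would
        # do here; only the key dependence matters for the argument.
        return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

    k_hw   = os.urandom(16)   # key held by the encryption hardware
    k_sess = os.urandom(16)   # TLS session key negotiated by the host

    wrapped_k_hw = xor_cipher(k_sess, k_hw)               # sent to the client
    record       = xor_cipher(k_hw, b"pre-encrypted record")

    # An attacker who recovers k_sess (i.e. compromises the supposedly weaker
    # host) unwraps k_hw and reads the pre-encrypted traffic as well:
    stolen_k_hw = xor_cipher(k_sess, wrapped_k_hw)
    assert xor_cipher(stolen_k_hw, record) == b"pre-encrypted record"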

As for the comment about Moore's law, I don't think we should rely on
it to mask our inefficiencies.  Rather, we -should- worry about it
when we decide on things like key lengths (but that's a different
topic).  Yes, the pre-encrypted data idea is oriented towards reducing
the load on heavily loaded servers that need security, not their
clients -- that's precisely the scenario that I had drawn in the first
message.  While this is not a _general_ point-to-point transport
security issue, running TLS on web servers _is_ the main application.
(People who want to use TLS for secure telnet are unlikely to care --
though the pre-encrypted data mechanism can benefit them as well when
used with a slow stream cipher.)

Regarding the suggestion of renegotiating for NULL encryption when
sending pre-encrypted data: that is not a good idea.  The purpose of
providing the pre-encryption mechanism (the same applies to the
on-the-fly compression found in SSLv3) is to hide the complexity from
the client.  Yes, if client programs are all smart and understand when the
data stream changes from normally encrypted traffic to
pre-encrypted-and-NULL-encrypted traffic, things would work, but the
point of providing the TLS abstraction is to hide such complexities
from the clients.  After all, the clients could just as well do their
own crypto.
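
To make the hidden complexity concrete, here is a rough sketch (the
record format and mode names are invented, not STLP's) of the extra
bookkeeping a "smart" client would need if the stream could flip
between session-encrypted and pre-encrypted-plus-NULL traffic:

    # Hypothetical sketch of the state pushed onto the client if the record
    # cipher can change mid-connection; none of this reflects real STLP fields.
    def process_records(records, session_decrypt, content_decrypt):
        mode = "session"          # extra state the TLS abstraction should hide
        plaintext = []
        for kind, payload in records:
            if kind == "change_cipher_spec":
                mode = payload    # e.g. switches to "null" for pre-encrypted runs
            elif mode == "session":
                plaintext.append(session_decrypt(payload))
            else:
                # NULL record cipher: the payload is still encrypted under the
                # content key, so the client must apply a second decryption.
                plaintext.append(content_decrypt(payload))
        return plaintext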

> 1) How much time does the PCTv2 pre-encryption handshake save over
> the standard SSLv3 resume-session handshake?, and

These are apples and oranges.  Resume session provides a means to
avoid the public key crypto overhead while obtaining fresh session
keys.  These session keys are used to encrypt the traffic with a
symmetric (secret-key) cipher.  Because of the way these keys are
derived, this mechanism
forces on-the-fly encryption and decryption.
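
Roughly (this only sketches the shape of SSLv3-style key derivation,
not its actual KDF): because record keys mix the master secret with
fresh per-connection randomness, ciphertext prepared for one
connection is useless for the next, hence the on-the-fly requirement:

    # Sketch of why resumed sessions still force on-the-fly encryption; SHA-256
    # here merely stands in for the real SSLv3/TLS key-derivation function.
    import hashlib, os

    def derive_record_key(master_secret, client_random, server_random):
        return hashlib.sha256(master_secret + client_random + server_random).digest()

    master_secret = os.urandom(48)   # reused across resumed sessions

    # Two resumed connections under the same master secret get distinct keys:
    k1 = derive_record_key(master_secret, os.urandom(32), os.urandom(32))
    k2 = derive_record_key(master_secret, os.urandom(32), os.urandom(32))
    assert k1 != k2   # so the data must be (re)encrypted per connection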

The pre-encryption mechanism assumes session keys are in place,
regardless of whether they were derived from a complete handshake or
from a resumed session.  The pre-encryption mechanism aims to avoid
the symmetric (secret-key) encryption overhead for data that is
(re)transmitted to many parties by permitting "backend" encryption at
the server (changing keys once in a while when the machine is lightly
loaded, etc.).
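
The amortization being aimed for looks roughly like this (hypothetical
names, and the XOR stand-in is again not a real record cipher):

    # Sketch of "backend" pre-encryption: the bulk work is done once, and the
    # per-client work shrinks to wrapping a short content key.
    import os

    def xor_cipher(key, data):
        return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

    document    = b"a large, frequently requested page " * 1000
    content_key = os.urandom(16)

    # Done once, e.g. while the machine is lightly loaded:
    pre_encrypted = xor_cipher(content_key, document)

    # Per client, only the 16-byte content key is encrypted under that client's
    # session key; the pre-encrypted body is reused verbatim:
    def serve(client_session_key):
        return xor_cipher(client_session_key, content_key), pre_encrypted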

> 2) if the answer to 1 is greater than epsilon, what analysis has been
> done to show that the pre-encryption handshake does not introduce
> new vulnerabilities to the protocol?

The same kind of analysis that is due any security protocol -- review
by security researchers/practitioners.  I don't know whether enough
has been done yet -- personally, I've looked at it for only a few
hours (classes to teach, grants to apply for :), and haven't looked at
the protocol as a whole at all (there may be interactions between how
pre-encryption is handled and other parts of STLP).

I don't see anything wrong with the idea of making Web security
technologies more scalable.  It does not reduce the security from
existing practice, and rather than just acting on fear that a
particular design may be erroneous (which I believe it isn't), I would
hope that we instead look at exactly how much more complicated it
makes the protocol (which I believe is "not significantly") and weigh
the risks against the benefits.

-bsy

--------
Bennet S. Yee		Phone: +1 619 534 4614	    Email: bsy@cs.ucsd.edu

Web:	http://www-cse.ucsd.edu/users/bsy/
USPS:	Dept of Comp Sci and Eng, 0114, UC San Diego, La Jolla, CA 92093-0114

Received on Wednesday, 24 April 1996 19:03:31 UTC