Re: Signatures and Authentication information must go at end of message.

From: Ned Freed <NED@innosoft.com>
Date: Thu, 08 Feb 1996 14:19:28 -0800 (PST)
To: hallam@w3.org
Cc: Ian Duncan <id@cc.mcgill.ca>, http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com, hallam@w3.org
Message-Id: <01I0Z3KQAR649PLST1@INNOSOFT.COM>
> > And I'd thought with all those fine modern physicists working there that
> > Oxford would have finally left 19th C. methods of modelling the universe
> > behind. As Ned Freed and Jeff Mogul generously explained, unintentionally
> > spitting out any very large random number is significantly smaller than
> > other more harmful sources of noise in the system.

> As a certain NASDAQ-quoted company recently discovered, it can be difficult to
> induce random behaviour in a computer system. Computers are very deterministic
> systems, and unfortunately few provide any access to the randomness of the
> quantum principles on which they are built.

This is simply not true. Computers in and of themselves do not provide much in
the way of nondeterministic sources, but computers do not exist in isolation.
For a computer system to be interesting, it has to accept input from the
outside world, and those input sources are full of entropy. There may not be many
bits of entropy in any given source -- sometimes there are very few -- but
there are usually lots of sources available.

Add to this the fact that it is easy to combine different sources of entropy
(quite possibly extending back to when the system was first initialized, if not
before via data included in the initialization sequence) and the result is that
getting hold of more than enough bits of entropy to move the chance of a
collision far below the chance of a network error simply isn't that hard.
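The combining step described above can be sketched in modern Python notation (this is an illustration of the general technique, not any particular 1996 implementation; the source names are examples):

```python
# Sketch: combine several weak entropy sources into one boundary string.
# No single source needs to be strong -- the hash of all of them together
# is what makes an accidental collision unlikely.
import hashlib
import os
import time

def make_boundary(extra_sources=()):
    """Hash together several entropy sources into a MIME-style boundary."""
    h = hashlib.md5()
    h.update(str(time.time()).encode())   # current clock reading
    h.update(str(os.getpid()).encode())   # process id
    h.update(os.urandom(16))              # OS entropy pool, where available
    for src in extra_sources:             # e.g. message headers, counters
        h.update(src)
    return "=_" + h.hexdigest()

boundary = make_boundary([b"Message-Id: <example@example.com>"])
```

The point is only that each input contributes a few bits of entropy and the digest accumulates them; the choice of MD5 versus any other mixing function is incidental for boundary generation.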

> Consider the following Gotcha procedure:

> Fred needs to create a MIME boundary, so he hashes the URI of the request with
> MD5. Unfortunately the document includes an MD5 of its own URI.

Fred's implementation is broken then, and so is the implementation that sent
the original document. I could implement production of HTTP headers
incorrectly, but doing so is not an indictment of the format of HTTP headers.

RFC 1750 is an excellent description of how to do this properly.
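A non-broken implementation also defends against the "Gotcha" directly: after picking a candidate boundary, it scans the body and picks again if the body happens to contain it. A minimal sketch of that check (the function and names are hypothetical):

```python
# Sketch: never emit a boundary that already occurs in the body.
# If the candidate collides, vary the input and retry.
import hashlib

def safe_boundary(body: bytes, uri: bytes) -> str:
    counter = 0
    while True:
        digest = hashlib.md5(uri + str(counter).encode()).hexdigest()
        candidate = "=_" + digest
        if candidate.encode() not in body:
            return candidate
        counter += 1  # collision with body content: perturb and try again
```

With this check in place, even Fred's pathological document (which contains the MD5 of its own URI) merely costs one extra iteration rather than producing a malformed message.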

> Stop pretending that you are generating "random" boundaries. You are not; you
> are simply applying a deterministic procedure which results in an even
> distribution of outcomes. The same inputs produce the same outputs, however.

We do use the word "random" carelessly. Truly random sources like nuclear
decay or capacitance effects or free-running oscillators are not readily
available on most systems. Such sources are now routinely implemented in
cryptographic hardware, however, and should hardware cryptographic support ever
get to be more common I'd expect the use of such sources to become commonplace.

However, the issue of whether or not something is truly random isn't especially
relevant. The goal is to use pseudo-random material in a fashion that closely
approximates true randomness. This is why I always characterize these things in
terms of entropy, not in terms of every bit of the output being truly random
and independent and so on.

> Any reasonable procedure for generating a boundary is likely to find itself
> repeated by chance. Given the difficulty people have in generating randomness
> for situations where they know it to be critical, I'm none too confident of
> their ability to generate it when they don't understand the importance.

There's a huge difference between "unpredictable" and "lots of entropy".
Suppose you have a source with lots of entropy and two observers of that
source. Each observer can predict the other's observation with total
reliability. Yet the observations of each still have high entropy.
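The two-observer situation above can be made concrete with a small sketch (an illustration only, using a seeded PRNG to stand in for a shared source):

```python
# Illustration of "high entropy yet predictable": two observers share
# the same deterministic source, so each predicts the other's stream
# exactly, even though the stream itself is statistically well mixed.
import random

observer_a = random.Random(42)
observer_b = random.Random(42)  # same seed => identical stream

stream_a = [observer_a.getrandbits(32) for _ in range(5)]
stream_b = [observer_b.getrandbits(32) for _ in range(5)]

# Perfectly predictable to the other observer, yet the values are
# evenly distributed over 32-bit space.
assert stream_a == stream_b
```

For boundary generation, the even distribution is what matters; the mutual predictability is harmless.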

Generation of good boundary markers depends on entropy, not unpredictability.
Someone's ability to predict the boundaries that might be generated on a given
system doesn't mean that the system is unworkable.

Cryptographic strength, on the other hand, depends on both randomness and
unpredictability. A hash of lots of input sources having some degree of entropy
may be sufficient to generate material of sufficient randomness to use for
boundary markers. However, it is probably too predictable to use for
cryptographic purposes, where you have to assume that at least some of the
input sources may have been observed by an enemy. For things to be
unpredictable you have to have sources available that cannot be predicted by
someone else.
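The contrast can be sketched as follows (modern Python, for illustration; the `secrets` module stands in for the kind of attacker-unobservable source the paragraph describes):

```python
# Sketch of the distinction drawn above.
import hashlib
import time
import secrets

# Boundary-grade: derived only from observable inputs. An attacker who
# can observe the clock can reproduce this value, which is acceptable
# for a boundary marker but not for a key.
predictable = hashlib.md5(str(time.time()).encode()).hexdigest()

# Key-grade: drawn from a source assumed to be unobservable by the
# attacker (the OS entropy pool).
key = secrets.token_bytes(16)
```

The first value has plenty of entropy from the observer's point of view; only the second is unpredictable in the sense cryptography requires.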

Nevertheless, the operational difficulties we have seen have been the result of
truly pathetic implementations. These implementations weren't even up to the
task of generating boundary markers in my opinion, let alone being competent to
generate something that is cryptographically strong.

> I understand fully the intention of the "proof", it is the axioms which I
> reject. Specifically the belief that a computer system generates events with
> disjoint probabilities.

You're right to the extent that computer systems aren't good sources of entropy
in and of themselves. But you're ignoring the context in which computers (at
least the ones we're concerned with) operate.

				Ned
Received on Thursday, 8 February 1996 15:02:04 EST

This archive was generated by hypermail pre-2.1.9 : Wednesday, 24 September 2003 06:31:44 EDT