RE: "Onion model" explained

Sorry for the delay, I was busy with the SAML Interop this week. 

I think this was a useful discussion, let me highlight a few points.

1. I still maintain that Authentication is never an end in itself; it is a
step that collects data to be used in some other decision. Pete Wenzel said
it best:

"It is not a very interesting application that performs authentication, then
continues in the same manner regardless of the outcome of the
authentication."

I interpret the phrase "not very interesting" to mean that this is not a use
case for us to base our requirements on.

Whether the AuthZ is done by OS, middleware, application code, or a human
being (rare in the context of a server or A2A) is an implementation choice,
not an architectural one. It's still AuthZ.

To return to the pragmatics of this for the STF, I don't think we have to
address all aspects of AuthZ; for example, we can probably defer policy
representation. However, I think we must address basic AuthZ inputs, such as
User Attributes and possibly AuthZ Decision Requests.

2. It is a fact beyond dispute that any cryptographic authentication scheme
requires that the party being authenticated prove that they know the secret
or private key in question, by performing an encryption or signature with it.
A certificate is only one of many methods of validating that the correct key
has been used. Merely presenting a certificate is not AuthN at all, it is
just a way of making an unsubstantiated claim.
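To illustrate the difference, here is a minimal challenge-response sketch. It uses a shared-secret HMAC in place of a public-key signature, and every name in it is hypothetical: the claimant proves it knows the key by computing over a fresh nonce, whereas presenting a certificate alone would prove nothing.

```python
import hashlib
import hmac
import os

def make_challenge():
    # Verifier picks a fresh random nonce so an old response cannot be replayed.
    return os.urandom(16)

def prove(key, challenge):
    # The claimant proves possession of the key by keying a MAC over the nonce.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key, challenge, response):
    # The verifier recomputes the MAC and compares in constant time.
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

The same shape holds for the public-key case; only the primitive changes.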

This is certainly a side issue. However, this error is made so frequently in
the press that I tend to jump on it immediately. It is my understanding that
not everyone on this list is equally familiar with security technology.

3. Since you (Joe) did not accept my invitation to describe precisely what
you mean by a "secure heartbeat application", let me guess. I am thinking
that there are some number of distributed processes that need to make a
central server aware that they are still functioning (or at least the
heartbeat component is). The central server keeps track of them and if it
does not hear from one for a certain period of time, it takes some action.
Because it does not want to be fooled by an attacker (who perhaps has
executed a denial of service attack) it insists that messages be
authenticated. Is this what you had in mind?

Presumably every time the server receives a message it performs two AuthZ
steps. 

1. If the AuthN failed or did not occur, ignore the message. 
2. Use the authenticated identity to locate some information about the last
time it heard from that particular process. (This is an example of instance
scoped access control, which has been the subject of recent work.)
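The two steps above could be sketched as follows; the table and function names are my own invention, not anything from the proposed design:

```python
import time

# Per-process record of the last time the server heard from each instance.
# This is the instance-scoped state that the authenticated identity selects.
last_seen = {}

def on_heartbeat(authenticated_id, now=None):
    # Step 1: if AuthN failed or did not occur, ignore the message.
    if authenticated_id is None:
        return False
    # Step 2: use the authenticated identity to locate and update the
    # record for that particular process.
    last_seen[authenticated_id] = time.time() if now is None else now
    return True
```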

Another point is that it seems to me overwhelmingly likely that the server
would be configured in some way to know the processes it was supposed to be
timing out. Assuming that, the design you proposed seems excessively
inefficient.
SSL/TLS requires the exchange of several messages, including the entire cert
chain every time. Assuming you know the players, you don't need certs at
all, but just a table of keys. You could even use a shared secret keyed
HMAC, instead of a PK signature.
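A sketch of what "just a table of keys" plus a keyed HMAC might look like; the message layout and all names are illustrative assumptions, not part of anyone's proposal:

```python
import hashlib
import hmac
import struct

# Keys configured out of band, one per known process; no certs needed.
KEYS = {"process-7": b"key-for-process-7"}

def make_heartbeat(process_id, timestamp):
    # Body is an 8-byte timestamp followed by the process id; the HMAC
    # (a shared-secret substitute for a PK signature) is prepended.
    body = struct.pack("!d", timestamp) + process_id.encode()
    mac = hmac.new(KEYS[process_id], body, hashlib.sha256).digest()
    return mac + body

def check_heartbeat(msg):
    # Returns the authenticated process id, or None if the MAC fails.
    mac, body = msg[:32], msg[32:]
    pid = body[8:].decode()
    key = KEYS.get(pid)
    if key is None:
        return None
    expected = hmac.new(key, body, hashlib.sha256).digest()
    return pid if hmac.compare_digest(expected, mac) else None
```

A real version would also reject stale timestamps, so a captured message could not be replayed later to keep a dead process looking alive.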

This seems to me to be an application that screams "datagram." I would have
each process send a signed timestamp at interval T. If the server doesn't
hear from a process for a period of KT, where K is something like 3,
depending on the reliability of the network, it times the process out. I
would be inclined to use UDP, which was designed for apps like this, but if
that is too shocking, then one way SOAP messages could be used. ;-)
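The timeout rule above reduces to a few lines; the values of T and K, and the times in the example, are illustrative only:

```python
T = 5.0  # reporting interval, seconds
K = 3    # missed intervals tolerated, tuned to network reliability

def timed_out(last_seen, now):
    # Processes not heard from within K*T are declared dead.
    return sorted(pid for pid, t in last_seen.items() if now - t > K * T)
```

With T = 5 and K = 3, a process is timed out only after 15 seconds of silence, so up to two lost datagrams cost nothing, which is why unreliable transport is acceptable here.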

The main point is that this appears to support Mark Baker's comments on
another thread that sometimes reliable messaging is not required or
desirable.

Hal

Received on Friday, 19 July 2002 09:28:22 UTC