- From: Phillip Hallam-Baker <hallam@gmail.com>
- Date: Sat, 18 Dec 2010 16:48:42 +0000
- To: Common Authentication Technologies - Next Generation <kitten@ietf.org>, websec <websec@ietf.org>, "saag@ietf.org" <saag@ietf.org>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>, General discussion of application-layer protocols <apps-discuss@ietf.org>, "http-auth@ietf.org" <http-auth@ietf.org>
- Message-ID: <AANLkTi=iGWnBtOgPhN9tRtaJTxQhvRkjq3p0UCkRdT8=@mail.gmail.com>
I think that we need to distinguish between an authentication mechanism and an authentication infrastructure. Part of the problem with HTTP authentication is that it was quickly superseded by HTML-based authentication mechanisms. And these in turn suffer from the problem that password authentication fails when people share their passwords across sites, which of course they have no choice but to do when every stupid web site requires them to create yet another stupid account.

Since Digest Authentication became an RFC, I don't think more than about six weeks have ever elapsed without someone suggesting to me that we include SHA-1 or SHA-2 as a digest algorithm. Which is of course pointless when the major flaw in the authentication infrastructure is the lack of an authentication infrastructure. The original reason for designing Digest the way that I did was that public key cryptography was encumbered. Had public key cryptography been available, I would have used it.

By authentication infrastructure, I mean an infrastructure that allows the user to employ the same credentials at multiple sites with minimal or no user interaction. I do not mean a framework that allows for the use of 20 different protocols for verifying a username and password.

We do have almost as many proposals for federated authentication as authentication schemes, of course. But each time there seems to be an obsession with the things that technocrats obsess about, and at best contempt for the actual user. OpenID almost succeeded. But why on earth did we have to adopt URIs as the means of representing a user account? And why was it necessary to design a spec around the notion that what mattered most was the ability to hack together an account manager using obsolete versions of common scripting languages?
Another feature of that debate I cannot understand is why we had to start talking about 'identity' as if it were some new and somehow profound problem that had only just been discovered. There is of course a standard for representing federated user accounts that has already emerged on the net. And once that is realized, the technical requirements of a solution become rather obvious.

As Web sites discover that their account holders cannot remember their usernames, most have adopted email addresses as account identifiers. That is what we should use as the basis for federated web authentication.

So if the user account identifier looks like username@example.com, how does an entity verify that a purported user has a valid claim to that account? The obvious mechanism in my view is to use DNS-based discovery of an authentication service. For example, we might use the ESRV scheme I have been working on:

    _auth._ws.example.com ESRV 0 prot "_saml._ws"
    _auth._ws.example.com ESRV 0 prot "_xcat._ws"

This declares that the SAML and 'XCAT' (presumably kitten in XML) protocols may be used to resolve authentication requests.

One major advantage of this approach is that it makes it easy for sites to move to using the new federated auth scheme: most sites already store an email address that is used to validate the account.

The actual mechanism by which the authentication claim is verified is not very interesting, nor does it particularly need to be standardized. What does require standardization is the ability to embed the protocol in 'the Web' in a fluent and secure manner. Here is how I suggest this be achieved:

1) HTTP Header

The Web browser attaches an offer of authentication, by means of an account attached to a specific domain, to (potentially) every request:

    Auth-N: domain=example.com

If the server does not support Auth-N, the header will simply be ignored. Otherwise the server can ask for automated authentication.
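The discovery step above can be sketched in a few lines. This is only an illustration: ESRV is a proposed record type, not a deployed one, so the lookup result is stubbed in, and the parsing rules assumed here (priority field, `prot` key, quoted protocol label) are inferred from the two example records rather than taken from any specification.

```python
# Sketch of DNS-based discovery: map a federated account identifier to
# the _auth._ws query name, then parse ESRV-style records into the list
# of authentication protocols the domain offers. The record syntax is
# copied from the examples above; the parsing rules are an assumption.

def discovery_name(account: str) -> str:
    """Map username@example.com to its _auth._ws discovery query name."""
    domain = account.rsplit("@", 1)[1]
    return f"_auth._ws.{domain}"

def parse_esrv(rdata: str) -> str:
    """Extract the protocol label from one ESRV record's data,
    e.g. '0 prot "_saml._ws"' -> '_saml._ws'."""
    priority, key, value = rdata.split(None, 2)
    assert key == "prot"
    return value.strip('"')

# Stand-in for the answer a real ESRV lookup would return.
records = ['0 prot "_saml._ws"', '0 prot "_xcat._ws"']

print(discovery_name("username@example.com"))  # _auth._ws.example.com
print([parse_esrv(r) for r in records])        # ['_saml._ws', '_xcat._ws']
```

A relying site would then pick whichever offered protocol (SAML, XCAT, ...) it also supports and hand the authentication claim to that service.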
2) HTTP Response

If the server decides to use the authentication mechanism, it responds with information that tells the client what level of authentication is required. For example, a bank might require a two-factor scheme. At a minimum there is going to be a nonce:

    Auth-N: snonce=<128bits>

3) HTTP Request

It should be possible for the client to prove that it has ownership of the authentication token corresponding to the account. It is not necessarily the case that the account owner wants to reveal all their information to the site. For example, they may not even want the site to know the account name. This is all fairly easy to set up using symmetric techniques:

    Auth-N: domain=example.com; blindedaccount=<>; snonce=<128bits>; cnonce=<128bits>

One feature that the OpenID work has highlighted the need for is some form of user-directed account manager. If the user is going to be in control of this process, they need to be able to specify what information is made available to specific sites.

Conclusion:

I think that what we require here is not yet another authentication framework or protocol. What we need is the glue to bind it into an infrastructure that makes it useful. The most important design decision is to make use of the RFC 822 email address format as the format for federated authentication accounts. Once that decision is made, the rest will simply fall out of stating the requirements precisely.

The risk here is that yet again we end up redoing the parts that we know how to build rather than focusing on the real problem, which is fitting them together. Above all, the user has to be the first priority in any design.
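The three-message exchange described above (offer, challenge, proof) can be sketched as follows. The message says the blinding is "fairly easy to set up using symmetric techniques" but does not specify a construction, so this sketch assumes the blinded account value is an HMAC keyed with a secret shared between the user's account manager and the authentication service, bound to both nonces; the function name and key handling are hypothetical.

```python
# Sketch of the Auth-N offer/challenge/proof exchange, under the
# assumption (not specified in the original message) that the blinded
# account is an HMAC over the account name and both nonces, keyed with
# a symmetric secret shared with the authentication service.
import hmac, hashlib, secrets

def blind_account(key: bytes, account: str, snonce: bytes, cnonce: bytes) -> str:
    """Blind the account name so the relying site never learns it;
    binding in both nonces makes the value fresh per exchange."""
    msg = account.encode() + snonce + cnonce
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

# 1) Request: the browser offers authentication for a domain.
offer = "Auth-N: domain=example.com"

# 2) Response: the server challenges with a 128-bit server nonce.
snonce = secrets.token_bytes(16)

# 3) Request: the client answers with a blinded account and its own nonce.
key = secrets.token_bytes(32)        # secret shared with the auth service
cnonce = secrets.token_bytes(16)
blinded = blind_account(key, "username@example.com", snonce, cnonce)
answer = (f"Auth-N: domain=example.com; blindedaccount={blinded}; "
          f"snonce={snonce.hex()}; cnonce={cnonce.hex()}")

# The authentication service, holding the same key and the account name,
# can recompute the HMAC and confirm the claim; the relying site cannot.
check = blind_account(key, "username@example.com", snonce, cnonce)
print(check == blinded)              # True
```

Note the design point this illustrates: the relying site forwards an opaque value plus nonces to the user's authentication service, so the site learns only that the claim checks out, not the account name, which is exactly the disclosure control a user-directed account manager would mediate.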
Received on Saturday, 18 December 2010 16:49:18 UTC