
Re: [websec] [apps-discuss] [kitten] [saag] HTTP authentication: the next generation

From: Nathan <nathan@webr3.org>
Date: Sun, 19 Dec 2010 13:47:06 +0000
Message-ID: <4D0E0CDA.6030605@webr3.org>
To: Adrien de Croy <adrien@qbik.com>
CC: Phillip Hallam-Baker <hallam@gmail.com>, Common Authentication Technologies - Next Generation <kitten@ietf.org>, websec <websec@ietf.org>, "saag@ietf.org" <saag@ietf.org>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>, General discussion of application-layer protocols <apps-discuss@ietf.org>, "http-auth@ietf.org" <http-auth@ietf.org>, foaf-protocols <foaf-protocols@lists.foaf-project.org>, Story Henry <henry.story@bblfish.net>
Hi Adrien, All,

What you describe sounds very much like WebID Protocol (formerly 
FOAF+SSL) - there's an incubator group just starting at the W3C [1] 
although the protocol [2] has been under development for some time.

Essentially it leverages X.509 certificates on the client side: each 
certificate contains an identifier (in the form of a URI) which can 
then be dereferenced to machine-readable data (containing the public 
key from the X.509 certificate and any other data the entity wishes to 
expose). It serves as decentralized, stateless authentication / 
identification, is compatible with and built on the deployed stack of 
internet technologies, and further enables all kinds of trust & 
reputation hooks.
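The dereference-and-compare step described above can be sketched as follows. This is a minimal illustration, not the spec's algorithm: the function names, the dictionary-shaped profile, and the stub fetcher are all hypothetical stand-ins for the real HTTP GET and RDF parsing a WebID verifier would perform.

```python
# Illustrative sketch of WebID verification: take the URI found in the
# client's X.509 certificate, dereference it to a machine-readable
# profile, and check that the profile lists the certificate's public
# key. All names here are hypothetical, not from the WebID spec.

def verify_webid(cert_uri, cert_modulus, cert_exponent, fetch_profile):
    """Return True if the dereferenced profile confirms the cert's key."""
    profile = fetch_profile(cert_uri)  # stands in for HTTP GET + RDF parse
    for key in profile.get("keys", []):
        if (key["modulus"] == cert_modulus
                and key["exponent"] == cert_exponent):
            return True
    return False

# A stub standing in for dereferencing a profile URI:
def fake_fetch(uri):
    return {"keys": [{"modulus": 0xB0B5C0DE, "exponent": 65537}]}

print(verify_webid("https://bob.example/profile#me",
                   0xB0B5C0DE, 65537, fake_fetch))  # True
```

Because the profile is fetched from a URI the account holder controls, no central registry is needed: possession of the certificate's private key plus control of the profile document is the whole proof.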

cc'd: Henry Story, foaf-protocols

[1] http://bblfish.net/tmp/2010/12/15/webid-charter-draft.html
[2] http://getwebid.org/spec/drafts/ED-webid-20100809/index.html



Adrien de Croy wrote:
> I think we need to go a bit further and consider the issue of trust.
> One problem with delegating account-holding back to a domain under the control 
> of the account-holder is that you have no trust.  I could be hacker@hackyou.com or 
> spammer@spamyou.com.  I can set up whatever account I like.  Websites have no 
> information about whether I'm trustworthy or not, and have to build up their own 
> individual profile of me.
> To be really useful, the account-holding must be with a trusted independent 
> organisation able to be relied on by other websites. 
> The organisation then has the opportunity to add value by
> a) verifying the true identity of the account holder
> b) maintaining reputation information about the account holder
> c) revoking abusive accounts.
> Ends up looking a lot like X.509 certificate infrastructure.  Imagine if 
> everyone needed a client certificate to send any mail.  We'd have no spam.
> Of course these sorts of concepts are completely unpalatable to many people on 
> account of privacy issues. Some of these activities are the sort of things that 
> governments should really be doing (and already are in many cases).
> Solving this problem has implications for all internet use, not just HTTP.
> Regards
> Adrien
> On 19/12/2010 5:48 a.m., Phillip Hallam-Baker wrote:
>> I think that we need to distinguish between an authentication mechanism and an 
>> authentication infrastructure.
>> Part of the problem with HTTP authentication is that it was quickly superseded 
>> by HTML based authentication mechanisms. And these in turn suffer from the 
>> problem that password authentication fails when people share their passwords 
>> across sites, which of course they have no choice but to do when every stupid 
>> web site requires them to create yet another stupid account. 
>> Since Digest Authentication became an RFC, I don't think more than about six 
>> weeks have ever elapsed without someone suggesting to me that we include 
>> SHA-1 or SHA-2 as a digest algorithm. That is of course pointless when 
>> the major flaw in the authentication infrastructure is the lack of an 
>> authentication infrastructure. The original reason for designing Digest the 
>> way that I did was that public key cryptography was encumbered. Had public key 
>> cryptography been available, I would have used it.
>> By authentication infrastructure, I mean an infrastructure that allows the 
>> user to employ the same credentials at multiple sites with minimal or no user 
>> interaction. I do not mean a framework that allows for the use of 20 different 
>> protocols for verifying a username and password.
>> We do have almost as many proposals for federated authentication as 
>> authentication schemes of course. But each time there seems to be an obsession 
>> with things that technocrats obsess about and at best contempt for the actual 
>> user.
>> OpenID almost succeeded. But why on earth did we have to adopt URIs as the 
>> means of representing a user account? And why was it necessary to design a 
>> spec around the notion that what mattered most in the design of the spec was 
>> the ability to hack together an account manager using obsolete versions of 
>> common scripting languages?
>> Another feature of that debate I cannot understand is why we had to start 
>> talking about 'identity' as if it was some new and somehow profound problem 
>> that had only just been discovered.
>> There is of course a standard for representing federated user accounts that 
>> has already emerged on the net. And once that is realized, the technical 
>> requirements of a solution become rather obvious.
>> As Web sites discover that their account holders cannot remember their 
>> username, most have adopted email addresses as account identifiers. That is 
>> what we should use as the basis for federated web authentication. 
>> So if the user account identifier looks like username@example.com, how does 
>> an entity verify that a purported user has a valid claim to that account?
>> The obvious mechanism in my view is to use DNS based discovery of an 
>> authentication service. For example, we might use the ESRV scheme I have been 
>> working on:
>> _auth._ws.example.com  ESRV 0 prot "_saml._ws"
>> _auth._ws.example.com  ESRV 0 prot "_xcat._ws"
>> This declares that the SAML and 'XCAT' (presumably kitten in XML) protocols 
>> may be used to resolve authentication requests.
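A relying site would query these records and collect the advertised protocols. The sketch below parses the two example records into that list; the field layout (owner, type, priority, "prot", quoted label) is inferred from the examples above, since ESRV is a draft scheme and the real record format may differ.

```python
# Hedged sketch: turn the ESRV records from the example above into the
# list of authentication protocols a domain advertises. The field
# layout is an assumption inferred from the example records.

def parse_esrv(records):
    protocols = []
    for rec in records:
        owner, rtype, priority, kind, label = rec.split()
        if rtype == "ESRV" and kind == "prot":
            protocols.append(label.strip('"'))
    return protocols

records = [
    '_auth._ws.example.com ESRV 0 prot "_saml._ws"',
    '_auth._ws.example.com ESRV 0 prot "_xcat._ws"',
]
print(parse_esrv(records))  # ['_saml._ws', '_xcat._ws']
```

In a real deployment the record strings would come from a DNS lookup against the user's claimed domain rather than a hard-coded list.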
>> One major advantage of this approach is that it makes it easy for sites to 
>> move to using the new federated auth scheme. Most sites already store an email 
>> address that is used to validate the account. 
>> The actual mechanism by which the authentication claim is verified is not very 
>> interesting, nor does it particularly need to be standardized. What does 
>> require standardization is the ability to embed the protocol in 'the Web' in a 
>> fluent and secure manner.
>> Here is how I suggest this be achieved:
>> 1) HTTP header
>> The Web browser attaches an offer of authentication by means of an account 
>> attached to a specific domain to (potentially) every request:
>> Auth-N: domain=example.com
>> If the server does not support Auth-N, the header will simply be ignored. 
>> Otherwise the server can ask for automated authentication.
>> 2) HTTP Response
>> If the server decides to use the authentication mechanism, it responds with 
>> information that tells the client what level of authentication is required. 
>> For example, a bank might require a two-factor scheme. At a minimum there is 
>> going to be a nonce.
>> Auth-N: snonce=<128bits>
>> 3) HTTP Request
>> It should be possible for the client to prove that it has ownership of the 
>> authentication token corresponding to the account. 
>> It is not necessarily the case that the account owner wants to reveal to the 
>> site all their information. For example, they may not even want the site to know 
>> the account name. This is all fairly easy to set up using symmetric techniques.
>> Auth-N: domain=example.com; blindedaccount=<>; 
>> snonce=<128bits>; cnonce=<128bits>
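The three messages above can be sketched as simple header construction. Since the "symmetric techniques" for blinding the account are left unspecified in the proposal, the HMAC over the account name and both nonces below is an assumption standing in for whatever construction would actually be standardized.

```python
# Hedged sketch of the three-step Auth-N exchange described above.
# The HMAC-based blinding is an assumed placeholder for the
# unspecified "symmetric techniques".
import hashlib
import hmac
import os

def offer():
    # Step 1: client offers authentication for an account at a domain.
    return {"Auth-N": "domain=example.com"}

def challenge():
    # Step 2: server responds with (at minimum) a 128-bit nonce.
    return {"Auth-N": "snonce=%s" % os.urandom(16).hex()}

def respond(account, key, snonce):
    # Step 3: client proves ownership without revealing the account name.
    cnonce = os.urandom(16).hex()
    msg = ("%s:%s:%s" % (account, snonce, cnonce)).encode()
    blinded = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return {"Auth-N": "domain=example.com; blindedaccount=%s; "
                      "snonce=%s; cnonce=%s" % (blinded, snonce, cnonce)}

snonce = challenge()["Auth-N"].split("=", 1)[1]
print(respond("username@example.com", b"shared-key", snonce)["Auth-N"])
```

The point of the blinding is visible in the sketch: the server sees only a keyed digest that its authentication service can verify, never the bare account name.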
>> One feature that the OpenID work has highlighted the need for is some form of 
>> user-directed account manager. If the user is going to be in control of this 
>> process, they need to be able to specify what information is made available to 
>> specific sites.
>> Conclusion:
>> I think that what we require here is not yet another authentication framework 
>> or protocol. What we need is the glue to bind it into an infrastructure that 
>> makes it useful.
>> The most important design decision is to make use of RFC822 email address 
>> format as the format for federated authentication accounts. 
>> Once that decision is made, the rest will simply fall out of stating the 
>> requirements precisely. 
>> The risk here is that yet again we end up redoing the parts that we know how 
>> to build rather than focusing on the real problem, which is fitting them together. 
>> Above all, the user has to be the first priority in any design. 
> -- 
> Adrien de Croy - WinGate Proxy Server - http://www.wingate.com
Received on Sunday, 19 December 2010 13:47:54 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 1 March 2016 11:10:55 UTC