
Re: [saag] [websec] [kitten] HTTP authentication: the next generation

From: Phillip Hallam-Baker <hallam@gmail.com>
Date: Sat, 8 Jan 2011 11:07:38 -0500
Message-ID: <AANLkTingp=V4KFWaEjUWPvNraNT3H6T_rXcC_8CmEeYW@mail.gmail.com>
To: Ben Laurie <benl@google.com>
Cc: Robert Sayre <sayrer@gmail.com>, "apps-discuss@ietf.org" <apps-discuss@ietf.org>, "Roy T. Fielding" <fielding@gbiv.com>, websec <websec@ietf.org>, "kitten@ietf.org" <kitten@ietf.org>, "http-auth@ietf.org" <http-auth@ietf.org>, "saag@ietf.org" <saag@ietf.org>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
I think that Ben is right that we are solving the wrong problem.

The problem is that users are asked to maintain accounts at literally
HUNDREDS of sites.

And some cretins, some utter morons, some bog-brained berks think it is
reasonable to tell the user to have a different password for every one!


I can't remember the account names; the password was easy, as I only had one
(for non-financial sites) - until those cretins at Gawker screwed up. Now I
have to reset my password at all those places.


We have to solve the federated auth problem and it is really, really easy:

Account Name is the RFC 821/822 email address.

This is what the Web has started to adopt of its own accord as a user
account identifier. It is the only one that people can remember reliably.


Authentication service is resolved via DNS service lookup

i.e. SRV or similar. I can show people how to fix up the issues around the
use of non-canonical names.
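The resolution step is mostly a matter of mapping the account identifier onto a DNS name. A minimal sketch, assuming a hypothetical "_auth._tcp" service label (no such label was registered; it only illustrates the lookup described above):

```python
# Sketch: deriving the DNS SRV query name for an account's auth service.
# The "_auth._tcp" label is an invented placeholder for illustration.

def auth_srv_name(email: str) -> str:
    """Map an RFC 822-style user@domain address to the SRV name to resolve."""
    local, _, domain = email.rpartition("@")
    if not local or not domain:
        raise ValueError("not a user@domain address: %r" % email)
    # DNS names are case-insensitive; canonicalize only the domain part
    # (the local part is opaque to everyone but the auth service).
    return "_auth._tcp." + domain.lower().rstrip(".")

print(auth_srv_name("Alice@Example.COM"))  # _auth._tcp.example.com
```

The client would then resolve that name to find the host and port of the domain's authentication service.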


Client authenticates to authentication service using any protocol they both
support.

This is quite simple to implement: just stick a list of supported auth
services in the DNS.

We can re-use all those existing auth protocols that work (SAML would be a
good choice but we don't need to be overly restrictive here.)
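The negotiation itself can be trivial: the domain advertises the protocols its auth service speaks, and the client picks one it also implements. A sketch, with an invented "auth=" record syntax standing in for whatever DNS record format would actually be specified:

```python
# Sketch of protocol negotiation against an advertised list.
# The "auth=SAML,OpenID,DIGEST" record syntax is invented for
# illustration; the advertised list is in server preference order.

def pick_protocol(advertised, client_supports):
    """Return the first server-preferred protocol the client also supports."""
    offered = advertised[len("auth="):].split(",")
    for proto in offered:
        if proto in client_supports:
            return proto
    return None  # no overlap: fall back to per-site passwords

print(pick_protocol("auth=SAML,OpenID,DIGEST", ["DIGEST", "SAML"]))  # SAML
```

Keeping the list open-ended is what avoids being overly restrictive: new protocols just get new tokens in the record.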


HTTP carries a standardized, non-linkable auth token

I have some ideas on how we could modify DIGEST to do this. DIGEST would not
be problematic if the password had 128 bits of entropy and we upgraded
the digest function.

The reason for re-using DIGEST here would be to avoid patent encumbrances. I
considered the issue of linkability at great length when writing the
original DIGEST design.
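The non-linkability property is easy to demonstrate: key a modern digest per site, and two relying parties see unrelated tokens they cannot correlate. A sketch of the idea, using HMAC-SHA256 as a stand-in for the "upgraded digest function" (this is illustrative, not the actual DIGEST revision being proposed):

```python
# Sketch: per-site tokens from a single high-entropy secret.
# SHA-256 stands in for the upgraded digest; the construction is
# illustrative only.

import hashlib
import hmac
import secrets

master_secret = secrets.token_bytes(16)  # 128 bits of entropy

def site_token(secret: bytes, site: str, nonce: str) -> str:
    """Token presented to one site; keyed so sites cannot link accounts."""
    msg = (site + ":" + nonce).encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

a = site_token(master_secret, "shop.example", "n1")
b = site_token(master_secret, "bank.example", "n1")
assert a != b  # different sites see unrelated tokens
```

With 128 bits of entropy behind the secret, offline guessing of the kind that plagues password-derived DIGEST responses is off the table.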



At the moment I am focused on getting the foundation laid. But I will try to
come up with a full proposal before Prague.


I know that you can achieve some of the desired authentication properties
with public keys at the client. The problem is that our current use of
computers has gone way beyond the one-machine-per-person paradigm.

On a recent trip to Europe with the family I counted that we had 10
computers with us capable of supporting IP (3 laptops, 3 iPhones, 2
Nintendos, 1 iPad and a Kindle).

CardSpace has some really, really great properties but they are totally lost
when you try to make the service accessible from multiple machines by
putting it 'in the cloud'. In fact, other than a manager, I have never found
anyone who even thinks they know what they mean by 'in the cloud' for
CardSpace. I certainly have never seen an explanation I can understand.

Devices get lost. Devices get stolen. We don't want to encourage that, so
there needs to be something more than just a certificate-based client auth
scheme.


I think we need some form of centralized (for a given account) account
management in the mix so that the user can authorize/deauthorize devices for
use (cf. Amazon's Kindle account management).

So there are basically two architectural options for using public keys. One
is to use them strictly between the client and the auth service and use a
token-like approach as discussed above. Another is for the auth service to
issue an assertion of the form 'you can tell it's fred by this public key
(amongst others)'.

SAML already has support for both approaches BTW.
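The second option can be sketched concretely: the auth service signs an assertion binding the account to its currently authorized device keys, and authorize/deauthorize is just editing that list and re-issuing. The structure below is invented for illustration (in practice SAML's equivalents would carry it); the key values are placeholders:

```python
# Sketch: an assertion binding an account to its authorized device
# keys. Structure and key values are invented placeholders; in
# practice the auth service would sign this and SAML would carry it.

assertion = {
    "account": "fred@example.com",
    "device_keys": {          # key-id -> public key (placeholders)
        "laptop": "pk-laptop",
        "phone":  "pk-phone",
    },
}

def deauthorize(assertion, key_id):
    """Lost/stolen device: drop its key and re-issue the assertion."""
    keys = dict(assertion["device_keys"])
    keys.pop(key_id, None)
    return {**assertion, "device_keys": keys}

after_theft = deauthorize(assertion, "phone")
print(sorted(after_theft["device_keys"]))  # ['laptop']
```

A relying party that only trusts keys named in the current assertion gets the Kindle-style property for free: deauthorizing the stolen phone revokes it everywhere at once.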


On Thu, Jan 6, 2011 at 10:31 AM, Ben Laurie <benl@google.com> wrote:
>
>
> On 6 January 2011 01:28, Robert Sayre <sayrer@gmail.com> wrote:
>>
>> > Peter Saint-Andre <stpeter@stpeter.im> wrote:
>> > 2. In 2007, Robert Sayre put together a few slides on the topic:
>> > http://people.mozilla.com/~sayrer/2007/auth.html
>>
>> These are back on the Web, in case anyone missed them (probably not).
>>
>> On Sun, Dec 12, 2010 at 5:39 PM, Roy T. Fielding <fielding@gbiv.com>
>> wrote:
>> >
>> > Define them all and let's have a bake-off.  It has been 16 years since
>> > HTTP auth was taken out of our hands so that the security experts could
>> > define something perfect.  Zero progress so far.
>>
>> I think the IETF might do better to focus on a smaller problem, at
>> first. People often use self-signed certificates with HTTP/TLS, even
>> though the first thing their websites ask the user to do is type a
>> username and password into a form. There are some well-understood ways
>> to make this process more secure. Why hasn't the IETF fixed this
>> problem? If this smaller problem has no ready solution, then the
>> larger issue of authentication on the entire Web seems like a tough
>> nut to crack.
>
> Two comments (one really being a response to Roy):
> 1. The IETF has fixed the problem, but no-one is using the fix - perhaps
> because it is not clear that it is the fix. I speak of RFC 4279, TLS
> pre-shared keys. These could be derived from a hash of the password and the
> site name, for example, and thus provide secure mutual authentication
> despite password reuse.
> 2. I have often heard (though I am not aware of hard evidence for this,
> nevertheless I find it plausible) that one reason no-one has bothered to
> improve HTTP auth is because no-one would use it since site owners want to
> control the user experience around signin. It seems to me, therefore, that
> HTTP is the wrong layer to fix the problem at - it needs to be pushed down
> into HTML or Javascript so that the page can control the look, while
> appropriate HTML elements or JS code can deal with the secure exchange of
> data.
> Of course, this still leaves the issue of trusted path: although we can
> provide elements which are safe to use, even when being phished, how does
> the user know those elements are actually being used, rather than simulated
> so as to get hold of the underlying password?
> The answer to this problem is hard, since it brings us back to taking the UI
> out of the site's hands.
>
>>
>> It could be that the reasons for this lack of progress are
>> nontechnical. Just throwing that out there.
>
> If you think UI is nontechnical, then I agree.
> Cheers,
> Ben.
>
> _______________________________________________
> saag mailing list
> saag@ietf.org
> https://www.ietf.org/mailman/listinfo/saag
>
>



-- 
Website: http://hallambaker.com/
Received on Saturday, 8 January 2011 16:08:11 GMT
