Re: [apps-discuss] [saag] [websec] [kitten] HTTP authentication: the next generation

OAuth and WebFinger are both pretty much as good as you can expect to
achieve if you decide at the start that you are not going to modify the
infrastructure.

But that decision limits what you can expect to achieve.


A particular problem with OAuth is that you can only use the identity
providers supported by the particular site. So I had to use my Twitter
account to play spymaster. And when Twitter got shirty and started blocking
other players' accounts, those players lost their game account and their
Twitter account at the same time.

There are people who think I should get a Facebook account just so that I
can use their application. Not happening.


As for WebFinger, it works, but not nearly as well as a clean DNS scheme
would. DNS is designed to address this problem; HTTP is not. Solving the
problem with HTTP requires a scheme that involves DNS + HTTP + SSL + PKIX,
which is rather a lot of moving parts, and to make matters worse, some of
those parts are not just moving, they are changing.
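
To make the moving-parts point concrete, here is a rough sketch (Python,
purely illustrative: the acct: identifier, the _auth._tcp SRV label and the
/.well-known/webfinger endpoint are assumptions of mine, not anything defined
in this thread) contrasting what an HTTP-based lookup has to touch against a
plain DNS query:

    # Sketch only: contrasts the dependency chains of the two discovery paths.
    import json
    import urllib.parse
    import urllib.request

    import dns.resolver  # dnspython, assumed to be installed

    IDENTIFIER = "acct:alice@example.com"   # hypothetical account identifier
    DOMAIN = IDENTIFIER.split("@", 1)[1]

    def discover_via_http(identifier, domain):
        # HTTP path: DNS (to find the host) + TLS + PKIX (to validate the
        # certificate) + HTTP, all before any account data comes back.
        url = ("https://%s/.well-known/webfinger?resource=%s"
               % (domain, urllib.parse.quote(identifier, safe="")))
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp)          # descriptor document for the account

    def discover_via_dns(domain):
        # DNS path: a single query against the system that was designed for
        # delegation and lookup. The "_auth._tcp" label is made up here; a
        # real scheme would have to standardise one.
        answers = dns.resolver.resolve("_auth._tcp." + domain, "SRV")
        return [(rr.target.to_text(), rr.port) for rr in answers]

The HTTP path only yields data after DNS resolution, a TLS handshake and PKIX
certificate validation have all succeeded; the DNS path is one query against
the infrastructure that was built for exactly this kind of lookup.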


I think it is a very good idea to look at WebFinger and OAuth and see how to
realize these approaches directly in the infrastructure. But they should be
seen as starting points, not ends.


On Sat, Jan 8, 2011 at 12:37 PM, Blaine Cook <romeda@gmail.com> wrote:

> Two points:
>
> 1. In this entire thread, no-one has mentioned OAuth. Maybe y'all
> don't like it, but it's used to authenticate more HTTP requests by
> volume and users than everything-except-cookies combined. You may want
> to consider the design of OAuth when proceeding with these
> discussions, rather than the laundry list of [completely] failed
> protocols.
>
> 2. With respect to federated auth, especially using email address-like
> identifiers, there has been a bevy of (deployed) work in this regard.
> The effort is called webfinger, and is worth a look. Instead of DNS,
> we use host-meta based HTTP lookups to dereference the identifiers.
> Many diaspora and status.net installs are using it today, and there
> are several proposals towards building a security & privacy
> infrastructure on top of webfinger (webid is one such proposal whose
> incorporation of client-side TLS certificates in a browser context
> makes me very wary of its potential for success).
>
> b.
>
> On 8 January 2011 08:21, Phillip Hallam-Baker <hallam@gmail.com> wrote:
> > On Thu, Jan 6, 2011 at 1:16 PM, Ben Laurie <benl@google.com> wrote:
> >>
> >> On 6 January 2011 16:03, David Morris <dwm@xpasc.com> wrote:
> >> >
> >> >
> >> > On Thu, 6 Jan 2011, Ben Laurie wrote:
> >> >
> >> >> The answer to this problem is hard, since it brings us back to taking
> >> >> the UI out of the site's hands.
> >> >
> >> > Which is only helpful if you can somehow guarantee that the user agent
> >> > software hasn't been compromised. Not something I'd bet on...
> >>
> >> That's rather overstating it. It's perfectly helpful when the UA
> >> software hasn't been compromised, which is a non-zero fraction of the
> >> time.
> >>
> >> When the UA s/w has been compromised I'm quite happy to fail to fix
> >> the problem: the right answer to that is to improve the robustness of
> >> the UA.
> >
> > +1
> > If the UA is stuffed then the user is totally and utterly stuffed anyway.
> > In particular, if the UA is stuffed then a forms-based experience is just
> > as stuffed. If we are going to hypothesize attack models, people have to
> > be willing to apply them to their preferred solution too.
> >
> > The sensible approach is to work out how to stop the user from being
> > stuffed, e.g.:
> >  * Comodo's free Anti-Virus with Default Deny Protection (TM)
> >  * Use code signing + trustworthy computing
> >  * Use a restricted browser
> > Now I have a lot of ideas on how we can tackle these, but they are not
> > relevant to this debate.
> >
> > I do, however, have a different take on the UI issue.
> > HTML forms did have an advantage over the pathetic UI that browsers
> > provided for BASIC and DIGEST (most don't even tell the user which is in
> > use).
> > But a federated auth scheme supported at the HTTP level could be simpler
> > still. Instead of the user having to register for each site, they register
> > once. Instead of the user having to log in to each site, they log in once
> > per session. Instead of the site having to manage lost passwords and
> > forgotten accounts because the user has hundreds, this problem does not
> > exist.
> >
> > It is a user interface crisis that is driving this need in my view.
> >
> > --
> > Website: http://hallambaker.com/
> >
> >
> > _______________________________________________
> > apps-discuss mailing list
> > apps-discuss@ietf.org
> > https://www.ietf.org/mailman/listinfo/apps-discuss
> >
> >
>



-- 
Website: http://hallambaker.com/
