
Re: [apps-discuss] [saag] [websec] [kitten] HTTP authentication: the next generation

From: Blaine Cook <romeda@gmail.com>
Date: Sat, 8 Jan 2011 09:37:00 -0800
Message-ID: <AANLkTi=GpV3O-8RaankHnV96JMNaE-R947rWJhoVO7LL@mail.gmail.com>
To: Phillip Hallam-Baker <hallam@gmail.com>
Cc: Ben Laurie <benl@google.com>, "apps-discuss@ietf.org" <apps-discuss@ietf.org>, David Morris <dwm@xpasc.com>, websec <websec@ietf.org>, "kitten@ietf.org" <kitten@ietf.org>, "http-auth@ietf.org" <http-auth@ietf.org>, "saag@ietf.org" <saag@ietf.org>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
Two points:

1. In this entire thread, no-one has mentioned OAuth. Maybe y'all
don't like it, but it's used to authenticate more HTTP requests, by
volume and by users, than everything-except-cookies combined. You may
want to consider the design of OAuth when proceeding with these
discussions, rather than the laundry list of [completely] failed HTTP
authentication schemes.

2. With respect to federated auth, especially using email address-like
identifiers, there has been a bevy of (deployed) work in this regard.
The effort is called webfinger, and is worth a look. Instead of DNS,
we use host-meta based HTTP lookups to dereference the identifiers.
Many diaspora and status.net installs are using it today, and there
are several proposals towards building a security & privacy
infrastructure on top of webfinger (webid is one such proposal whose
incorporation of client-side TLS certificates in a browser context
makes me very wary of its potential for success).
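The dereferencing step is straightforward. Here's a minimal sketch in Python (the function names are mine, and the `{uri}` placeholder and `acct:` scheme follow the host-meta/LRDD drafts; a real resolver would also fetch and parse the XRD documents):

```python
import urllib.parse

def host_meta_url(identifier):
    """Given an email-like identifier (user@host), return the URL of the
    host's host-meta document, where the LRDD link template lives."""
    host = identifier.rsplit("@", 1)[1]
    return f"https://{host}/.well-known/host-meta"

def lrdd_url(template, identifier):
    """Expand an LRDD URI template from host-meta by substituting the
    percent-encoded acct: URI for the {uri} placeholder."""
    acct = "acct:" + identifier
    return template.replace("{uri}", urllib.parse.quote(acct, safe=""))

# A resolver fetches host_meta_url(...), pulls the template out of the
# XRD <Link rel="lrdd" template="..."/> element, then fetches
# lrdd_url(template, identifier) to get the user's descriptor.
```

So resolving `alice@example.com` means one HTTP GET to `https://example.com/.well-known/host-meta`, then one more to the expanded template URL; no DNS machinery beyond ordinary hostname resolution.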


On 8 January 2011 08:21, Phillip Hallam-Baker <hallam@gmail.com> wrote:
> On Thu, Jan 6, 2011 at 1:16 PM, Ben Laurie <benl@google.com> wrote:
>> On 6 January 2011 16:03, David Morris <dwm@xpasc.com> wrote:
>> >
>> >
>> > On Thu, 6 Jan 2011, Ben Laurie wrote:
>> >
>> >> The answer to this problem is hard, since it brings us back to taking
>> >> the UI
>> >> out of the sites hands.
>> >
>> > Which is only helpful if you can somehow guarantee that the user agent
>> > software hasn't been compromised. Not something I'd bet on...
>> That's rather overstating it. It's perfectly helpful when the UA
>> software hasn't been compromised, which is a non-zero fraction of the
>> time.
>> When the UA s/w has been compromised I'm quite happy to fail to fix
>> the problem: the right answer to that is to improve the robustness of
>> the UA.
> +1
> If the UA is stuffed then the user is totally and utterly stuffed anyway.
> In particular if the UA is stuffed then a forms based experience is just as
> stuffed. If we are going to hypothesize attack models, people have to be
> willing to apply them to their preferred solution too.
> The sensible approach is to work out how to stop the user from being stuffed
> e.g.
>  * Comodo's free Anti-Virus with Default Deny Protection (TM)
>  * Use code signing + trustworthy computing
>  * Use a restricted browser
> Now I have a lot of ideas on how we can tackle these, but they are not
> relevant to this debate.
> I do however have a different take on the UI issue.
> HTML forms did have an advantage over the pathetic UI that browsers provided
> for BASIC and DIGEST (most don't even tell the user which is in use).
> But a federated auth scheme supported at the HTTP level could be simpler
> still. Instead of the user having to register for each site, they register
> once. Instead of the user having to log in to each site they log in once per
> session. Instead of the site having to manage lost passwords and forgotten
> accounts because the user has hundreds, this problem does not exist.
> It is a user interface crisis that is driving this need in my view.
> --
> Website: http://hallambaker.com/
> _______________________________________________
> apps-discuss mailing list
> apps-discuss@ietf.org
> https://www.ietf.org/mailman/listinfo/apps-discuss
Received on Saturday, 8 January 2011 17:38:09 UTC
