Re: [http-auth] [websec] [kitten] [saag] HTTP authentication: the next generation

Dear Yoav,

[notice: Reply-to limited]

(1) I largely agree with your view on Cookies and Client certificates.
Client certificates are useful for the Web world only in very limited cases.
In reality, many banks in Japan (perhaps in the US too?) introduce certificate
authentication for corporate-account on-line banking but not for the personal-
account counterparts.  It introduces a heavy burden on both the server and
client sides, which cannot be justified by the required security.

Also, the use cases of Web authentication are not limited to the "strong
identity" usually tied to client certificates and CA hierarchies.  In many use
cases, sites provide simple interfaces accepting "almost anonymous"
registrations.  Issuing a certificate for every user of every such service is
unlikely, and tying those accounts to existing client certificates introduces
serious privacy concerns.

(2) However, I do not agree with putting the authentication in a layer lower
than HTTP.  TLS authentication will only work as a site-wide fixed
authentication setting, which is not deployable for large numbers of websites.
See how Yahoo serves both unauthenticated and authenticated users, and how
Google implements "GMail for your domains" (log-in/out control is independent
for each domain account, hosted on a single server).

More technically, it is semantically wrong.  HTTP has two layers of "messaging
channels" inside it: the lower level is a "keep-alive" stream transport (TCP and
TLS), and the higher level is a packet interface made of request/response pairs.
From the application's point of view, authorization happens at the higher level.
For each resource the server may or may not request authentication, and the
client responds accordingly.  The resources on a single server may belong to
several separate "authentication realms", including a special "unauthenticated"
one.  For such settings, authentication should be tied to the higher level in
some way.  If you put authentication in the lower level, requests for different
realms will be mixed up in an inappropriate lower channel.  Changing those
semantics is too hard and seems inappropriate for any deployment.

This explains why TLS auth works *for some Web/HTTP applications*: each of
these is simple enough that it has only one "authentication realm" inside it.
Corporate banking is such a case, and so is IPP.  It is also true of many
non-HTTP TLS applications such as POP3 and IMAP.  But it is not true for
general Web applications, and we cannot enforce such design restrictions on
Web services.

(3) People are often concerned about the security impact of putting auth in
the higher layer.  Such a problem can be solved technically, for example by
using channel bindings.  Careful protocol designs can give even better
operational properties than TLS-based authentication.  For example, our Mutual
auth proposal can be used securely over HTTPS, preventing man-in-the-middle
credential-forwarding attacks, while allowing off-loading of the TLS overhead
to existing hardware TLS accelerators without any changes.
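To make the channel-binding idea concrete, here is a deliberately simplified
sketch — not the actual Mutual-auth algorithm — of how an HTTP-layer
challenge-response can be tied to the underlying TLS channel.  The client
mixes a channel-binding value (e.g. the "tls-unique" value, exposed in Python
via SSLSocket.get_channel_binding()) into its response, so a man-in-the-middle
terminating its own TLS session sees a different binding value and cannot
usefully forward the credentials.  The key derivation and HMAC construction
below are illustrative assumptions only.

```python
# Simplified, hypothetical sketch of channel-bound authentication.
# Not the real Mutual-auth construction; for illustration only.
import hashlib
import hmac
import os

def auth_response(password: bytes, server_nonce: bytes,
                  channel_binding: bytes) -> bytes:
    """Response bound to both the server's challenge and the TLS channel."""
    key = hashlib.sha256(password).digest()          # toy key derivation
    return hmac.new(key, server_nonce + channel_binding,
                    hashlib.sha256).digest()

# Honest case: client and server observe the same TLS channel binding.
password, nonce = b"secret", os.urandom(16)
binding = os.urandom(12)        # stands in for the real tls-unique value
assert auth_response(password, nonce, binding) == \
       auth_response(password, nonce, binding)

# MITM case: the attacker's own TLS session to the server yields a
# different binding value, so the forwarded response no longer verifies.
mitm_binding = os.urandom(12)
assert auth_response(password, nonce, binding) != \
       auth_response(password, nonce, mitm_binding)
```

The point is only that the binding to the secure channel happens above TLS,
in the authentication exchange itself, so the TLS endpoint (e.g. a hardware
accelerator) need not be changed.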

(4) For non-TLS applications: in the real world I have heard too many voices
about the inapplicability of HTTPS in real systems.  I think that
authentication in the non-HTTPS world MUST be improved, too.  In fact, more
than one person has asked (requested) me whether our Mutual authentication can
be used for integrity protection of non-HTTPS responses, as HTTPS is not
deployable for the whole of their services, and they need to improve
authentication (and integrity).

On 2010/12/13 17:49, Yoav Nir wrote:
> On Dec 13, 2010, at 2:49 AM, Marsh Ray wrote:
>> On 12/12/2010 04:39 PM, Roy T. Fielding wrote:
>>> Define them all and let's have a bake-off.  It has been 16 years
>>> since HTTP auth was taken out of our hands so that the security
>>> experts could define something perfect.  Zero progress so far.
>> Perhaps it's a bad idea?
> Disagree. Authentication is very much needed in the web. The current state is that websites have you fill in a form, and store a session cookie. We all know how this fails. An attacker can make a login page that looks like Google's or PayPal's just as easily as the respective organizations can, and thereby gain access to users' credentials.
> This is good enough for Google, who need the cookies to track your searches and emails so as to match them to ads. A little bit of someone searching the web with someone else's userid doesn't make much difference to them. But if I have some important information in my gmail account, or money in bankleumi, I would like to avoid sending my password in the clear to an attacker's site.  And things like SRP can do this. If SRP succeeds with a remote server, I know that it's the right server.
> Client certificates would work even better, but client certificates have their own set of problems, which is why they're not widely used.
> I agree with you that layering whatever authentication on top of HTTP without at least message authentication is not a good idea, so TLS seems like the right choice - all those bank and email sites are already doing it. But we do need something more useable than certificates, and for now, this means passwords.
>>> We
>>> should just define everything and let the security experts do what
>>> they do best -- find the holes and tell us what not to implement.
>> I know some professional pen-testers who would love that!
>> Check out these videos. This is what happens when you take a 
>> general-purpose authentication protocol and repurpose it for use across 
>> the internet for an insecure application protocol:
>> This case is NTLMv2, but the phenomenon is not limited to that.
>> The problem is that most general-purpose authentication protocols do not 
>> require enough specificity about the context of the authentication: who 
>> and what are you authenticating, to whom, and how does each side know 
>> it's operating under the same beliefs as the other?
>> This means that even if the client wants to be careful and authenticate 
>> only for the purpose of setting up a secure connection, the attacker can 
>> possibly forward that authentication to auth his own connection or 
>> transaction on some other service (on the same or even a different server).
>> Most auth protocols don't let the client strongly verify the server's 
>> identity before the client has to authenticate with his own. This is 
>> probably at least in part because it requires some common infrastructure 
>> to do this. So Kerberos and x509 PKI systems can authenticate the server 
>> (and sometimes even the target service), but most others do not.
>> Since HTTP lacks connection integrity, it's meaningless to speak of "an 
>> authenticated client". Perhaps the only thing that could be meaningfully 
>> authenticated is the request data itself. But auth protocols designed 
>> for setting up persistent connections typically don't have defined 
>> inputs for the message data/digest being signed, so it's often 
>> impractical to reuse them for that purpose.
>> These issues have been mostly addressed at the protocol level for TLS 
>> client cert authentication. If it really just comes down to deployment 
>> and client usability issues, it's hard to imagine coming up with 
>> something at another layer which would have less risk than building on 
>> top of that.
>> Deploying new uses of compatible, standard authentication protocols over 
>> insecure application protocols can be bad for the greater security 
>> ecosystem because it widens the field for cross-protocol attacks.
>> - Marsh
>> _______________________________________________
>> websec mailing list
> _______________________________________________
> http-auth mailing list
Yutaka Oiwa (大岩 寛)   National Institute of Advanced Industrial Science and Technology (AIST)
            Research Center for Information Security, Software Security Research Team
                                      <>, <>
OpenPGP: id[440546B5] fp[7C9F 723A 7559 3246 229D  3139 8677 9BD2 4405 46B5]

Received on Tuesday, 14 December 2010 06:26:53 UTC