Re: Kerberos Authentication question(s)

Please beware, I'm copying ietf-http-wg at w3.org, so remove one or the
other address (or both!) when replying if necessary. I'm not sure about
cross-posting netiquette these days, but this has become more about
HTTP authentication (non-Enterprise points at the end) now than it is
about Kerberos.

On Thu, Jun 25, 2015 at 12:39 PM, Benjamin Kaduk <kaduk@mit.edu> wrote:
>> gotten better over the years). Note that the reason the Windows SSPI
>> is used by Java is largely because there is otherwise no way to insert
>> credentials into the Windows credential cache. It actually used to be
>> possible but at some point early on MS decided this was probably not a
>> good idea and so now Java and MIT or Heimdal or whoever cannot insert
>> creds into the Windows credential cache. They have to just use the
> 
> This is simply not true -- both MIT KfW and Heimdal for Windows can insert
> Kerberos credentials into the Windows LSA credential cache.

Interesting. I don't know why I thought it wasn't possible but it's
actually good to hear that it is.

>> Again I am not familiar with your "token" code but fortunately, even
>> though the Authorization: Negotiate ... token is an Internet
>> Explorer-ism, it's SPNEGO payload is largely compatible with the
> 
> Huh?  HTTP-Negotiate is published as RFC 4559 and is supported by all
> major browsers.  The token formats described therein are explicitly the
> SPNEGO GSS-API token formats; there's no need to hedge with "largely".

I think it is amusing when someone starts citing RFCs for something
like this. HTTP-Negotiate was initially used by Internet Explorer and
Windows, and more specifically is a product of the Windows SSPI; it is
purely a Microsoft invention. So I think it is a little disingenuous to
cite an RFC in this case. HTTP-Negotiate was defined by how Internet
Explorer behaved, and any RFC that properly states that behavior is
mostly an afterthought. Having said that, I think it would be difficult
for MS to change the behavior of SPNEGO now that there is an RFC for
it, especially since the RFC was written mostly by folks from
Microsoft.

As for the "largely" hedge, that is because some of these protocols
are ill-defined and so naturally the RFCs have holes. For example,
SPNEGO is supposed to "negotiate" a mechanism. But in practice it does
no such thing, and it's not entirely clear how it could negotiate a
mechanism. Have you ever seen a client try to do NTLM and end up doing
Kerberos, or vice versa (meaning the server says "no, I can't do
Kerberos, please try again with NTLM")? No. So SPNEGO is actually
completely useless.
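
To make that concrete, here's roughly what a Java client does through
JGSS (hypothetical host name, and you need a valid Kerberos ticket
cache for it to actually run): it commits to a mech up front and fires
off an optimistic token, and there is no exchange where the server
talks it down to a different mech.

  import org.ietf.jgss.GSSContext;
  import org.ietf.jgss.GSSManager;
  import org.ietf.jgss.GSSName;
  import org.ietf.jgss.Oid;

  public class SpnegoOptimistic {
      public static void main(String[] args) throws Exception {
          GSSManager manager = GSSManager.getInstance();
          // Hypothetical service principal for the target web server
          GSSName server = manager.createName(
                  "HTTP@www.example.com", GSSName.NT_HOSTBASED_SERVICE);
          Oid spnego = new Oid("1.3.6.1.5.5.2"); // SPNEGO mech OID

          // The client picks its preferred mech and emits an "optimistic"
          // token immediately; there is no back-and-forth where the server
          // steers it from Kerberos to NTLM or vice versa.
          GSSContext ctx = manager.createContext(
                  server, spnego, null, GSSContext.DEFAULT_LIFETIME);
          byte[] token = ctx.initSecContext(new byte[0], 0, 0);
          // token is what gets base64'd into "Authorization: Negotiate ..."
          ctx.dispose();
      }
  }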

And it is equally funny that these "protocols" *try* to implement the
same thing over and over. SPNEGO is used to select a mech, GSSAPI uses
an OID to indicate the mech, and SASL has a field that specifies the
mech, which leads to the odd scenario where SASL selects the GSSAPI
mech which in turn has an OID for the actual mech used. Bah! This is
at least extraneous and silly in light of the fact that in practice
99.999% of clients are ultimately just using Kerberos or NTLM.
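
You can see the layering right in the JDK. This is just a sketch with
made-up names (and the GSSAPI mech will want a Kerberos credential the
moment you actually drive it), but it shows SASL selecting a mech by
name only for that mech to identify itself all over again by OID
underneath:

  import javax.security.sasl.Sasl;
  import javax.security.sasl.SaslClient;

  public class SaslLayering {
      public static void main(String[] args) throws Exception {
          // SASL selects a mech by name ("GSSAPI") ...
          SaslClient sc = Sasl.createSaslClient(
                  new String[] { "GSSAPI" },
                  null,                 // authorization id
                  "ldap",               // protocol (hypothetical)
                  "ldap.example.com",   // server name (hypothetical)
                  null,                 // properties
                  null);                // callback handler

          // ... and the GSSAPI mech then identifies the *actual* mech
          // (Kerberos, OID 1.2.840.113554.1.2.2) inside its own tokens.
          // With SPNEGO in the picture the same selection happens again.
          byte[] token = sc.hasInitialResponse()
                  ? sc.evaluateChallenge(new byte[0])
                  : new byte[0];
          sc.dispose();
      }
  }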

And as long as I'm peeing in the punch, I would go so far as to say
these "protocols" are BAD, because each one is another layer of stuff
that someone has to implement, and so things like integrity and
confidentiality options get left out or implemented incorrectly
because it's not clear how the various flags translate from one layer
to the next.

So now you ask, "OK genius, so what would have been the correct way to
implement this stuff?" The answer is to just do / document what the
actual working implementation does and not try to design a new Utopian
protocol at a round table at some tech conference (think X.500 and DER
encoding). For example, in the case of HTTP authentication, it should
never have used SPNEGO or GSSAPI. It should have just used raw
Kerberos or raw NTLMSSP as produced by the MS InitializeSecurityContext
function. Incidentally, this would have made negotiation fairly easy.

So I would argue HTTP authentication should have looked something like
this (with "negotiation" thrown in for fun):

C: Authorization: NTLMSSP <optimistic token>
S: WWW-Authenticate: Kerberos
C: Authorization: Kerberos <optimistic token>
S: 200 OK

Although in practice negotiation would not occur any more than it does
now, because 99.999% of clients support the same two protocols and
always favor Kerberos.

>> that do this type of stuff. HTTP authentication is actually a lot
>> harder than it looks because HTTP is stateless and so technically
>> trying to do authentication (which is inherently stateful) over a
>> stateless protocol is something of an oxymoron. So test your code
>> carefully using all of the possible environmental parameters like
>> different servers or whatever.
> 
> HTTP-Negotiate is even worse, in that it ties things to the underlying
> TCP/TLS connection, which is a clear violation of the spec.  There are
> mostly-reasonable ways to do HTTP authentication with strong crypto
> (imagine an RPC-like system which gets a token that is used to
> authenticate subsequent requests, with explicit parameters in the RPC to
> preserve server state across iterations in the authentication loop), but
> this one is not it.

Actually, last I checked, Kerberos over HTTP does not authenticate the
TCP connection. The Kerberos ticket is actually re-submitted with
every single request. But NTLM does authenticate the TCP connection,
which makes it a violation of the HTTP spec. It's not clear to me why
an authenticated TLS connection is bad in some way. Assuming the certs
are proper and the chain of trust is validated by both ends and such,
I would think that would be pretty secure. But re-submitting the
Kerberos ticket with each request and / or using TLS just to make the
auth step stateful is pretty inefficient (especially if there's a big
PAC in the Kerberos ticket).
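
To illustrate the "re-submitted with every single request" part, here
is a rough sketch of what a client ends up doing per request
(hypothetical URL, real ticket cache required): mint a fresh token and
stuff it into the Authorization header, with nothing bound to the
connection.

  import java.net.HttpURLConnection;
  import java.net.URL;
  import java.util.Base64;
  import org.ietf.jgss.GSSContext;
  import org.ietf.jgss.GSSManager;
  import org.ietf.jgss.GSSName;
  import org.ietf.jgss.Oid;

  public class PerRequestNegotiate {
      public static void main(String[] args) throws Exception {
          GSSManager manager = GSSManager.getInstance();
          GSSName server = manager.createName(
                  "HTTP@www.example.com", GSSName.NT_HOSTBASED_SERVICE);
          Oid spnego = new Oid("1.3.6.1.5.5.2");

          // A fresh context (and therefore a fresh AP-REQ carrying the
          // service ticket and PAC) is produced for this one request;
          // nothing ties the authentication to the TCP connection.
          GSSContext ctx = manager.createContext(
                  server, spnego, null, GSSContext.DEFAULT_LIFETIME);
          byte[] token = ctx.initSecContext(new byte[0], 0, 0);

          HttpURLConnection conn = (HttpURLConnection)
                  new URL("http://www.example.com/").openConnection();
          conn.setRequestProperty("Authorization",
                  "Negotiate " + Base64.getEncoder().encodeToString(token));
          System.out.println(conn.getResponseCode());
          ctx.dispose();
      }
  }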

However, I don't really blame HTTP-Negotiate or HTTP-NTLM for
violating the HTTP spec, because the HTTP spec provides NO WAY to do a
complete stand-alone authentication. Authentication is inherently
stateful because there is always some kind of "challenge" or "nonce"
that one or both sides must factor into a hash, which the other end
has to compute separately to prove it knows The Secret. So if a client
starts authentication with one HTTP server, and a "nonce" is computed
and saved in some state on that server, but the next request gets
routed to a different server, the second server has no knowledge of
the nonce and authentication is not possible.

This is why Digest Authentication is vulnerable to replay attacks: it
carries the "nonce" with the request. Because subsequent requests
could go to a different server, the server cannot save the nonce to
verify it's not being replayed. So to be truly stateless, a Digest
server implementation has to just trust the nonce.
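
For reference, here is the Digest computation from RFC 2617 (no qop,
made-up credentials and nonce). The whole proof is a hash over values
the client supplies, including the nonce it echoes back, so a
stateless server can verify the hash but has no way to know whether
the nonce is fresh:

  import java.nio.charset.StandardCharsets;
  import java.security.MessageDigest;

  public class DigestResponse {
      static String md5hex(String s) throws Exception {
          byte[] d = MessageDigest.getInstance("MD5")
                  .digest(s.getBytes(StandardCharsets.UTF_8));
          StringBuilder sb = new StringBuilder();
          for (byte b : d) sb.append(String.format("%02x", b));
          return sb.toString();
      }

      public static void main(String[] args) throws Exception {
          String user = "alice", realm = "example", password = "secret";
          String method = "GET", uri = "/index.html";
          String nonce = "dcd98b7102dd2f0e8b11d0f600bfb0c0"; // echoed back

          String ha1 = md5hex(user + ":" + realm + ":" + password);
          String ha2 = md5hex(method + ":" + uri);
          String response = md5hex(ha1 + ":" + nonce + ":" + ha2);

          // The proof is entirely a function of values in the request.
          // A server that kept no record of which nonces it issued (or
          // has already seen) can verify the hash but cannot detect a
          // replay.
          System.out.println("response=" + response);
      }
  }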

Note that Kerberos over HTTP is not a complete stand-alone
authentication. The authentication already happened when the client
got a TGT from a third party service. The client is just passing what
is effectively a temporary key that must be validated by said third
party.

I'm not sure what you mean by using RPCs, but bear in mind that any
kind of third-party service could NOT be based on HTTP (because that
would just be pushing the poop around without actually getting rid of
it). And a non-HTTP-based third-party authentication service probably
would not play well on the Internet. So HTTP sites are still
processing plaintext passwords on the server, which is of course
ridiculously insecure ...

I haven't really thought too much about this, but I have to wonder if
it would be better to make HTTP optionally / partially stateful, where
a client could generate a "Client-ID" every once in a while, which
would just be a long random number, and then require HTTP proxies and
load balancers and such to maintain an index of these IDs and then
*try* to route requests to the same downstream server. I think they
already pretty much have to do this for TLS, and proxies worth more
than their weight in bytes probably already do this for session IDs to
implement session stickiness. But with the Client-ID method it would
not have to be tied to a TCP connection, and with one new header we
might knock out cookies and session IDs and other such things, which
of course are just weak methods that try to work around HTTP being
stateless. WRT authentication, the server would just use the Client-ID
to look up the authentication state. And if the Client-ID also
included an integrity code, that would go a looong way.
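
Just to show the shape of what I mean, here is a sketch. Everything in
it is hypothetical, including the header name and the unanswered
question of who holds the key for the integrity code, but the
Client-ID itself would be nothing more than a long random value plus a
tag:

  import java.security.SecureRandom;
  import java.util.Base64;
  import javax.crypto.Mac;
  import javax.crypto.spec.SecretKeySpec;

  public class ClientIdSketch {
      public static void main(String[] args) throws Exception {
          SecureRandom rng = new SecureRandom();

          // A long random identifier the client mints every once in a
          // while and sends on each request as a Client-ID header
          byte[] id = new byte[16];
          rng.nextBytes(id);

          // An integrity code so a forged or tampered Client-ID can be
          // rejected (how the key gets shared is deliberately left open)
          byte[] key = new byte[32];
          rng.nextBytes(key);
          Mac mac = Mac.getInstance("HmacSHA256");
          mac.init(new SecretKeySpec(key, "HmacSHA256"));
          byte[] tag = mac.doFinal(id);

          Base64.Encoder b64 = Base64.getUrlEncoder().withoutPadding();
          System.out.println("Client-ID: "
                  + b64.encodeToString(id) + "." + b64.encodeToString(tag));
      }
  }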

Mike

-- 
Michael B Allen
Java Active Directory Integration
http://www.ioplex.com/
