Re: Kerberos Authentication question(s)

Some clarifications about HTTP auth interactions inline...

On 26/06/2015 8:44 a.m., Michael B Allen wrote:
> Pls beware, I'm copying ietf-http-wg at w3.org so please remove one or
> the other addresses (or both!) when replying if necessary. Not sure
> about cross posting netiquette these days but this has become more
> about HTTP authentication (non-Enterprise points at the end) now than
> it is about Kerberos.
> 
> On Thu, Jun 25, 2015 at 12:39 PM, Benjamin Kaduk <kaduk@mit.edu> wrote:
>>> gotten better over the years). Note that the reason the Windows SSPI
>>> is used by Java is largely because there is otherwise no way to insert
>>> credentials into the Windows credential cache. It actually used to be
>>> possible but at some point early on MS decided this was probably not a
>>> good idea and so now Java and MIT or Heimdal or whoever cannot insert
>>> creds into the Windows credential cache. They have to just use the
>>
>> This is simply not true -- both MIT KfW and Heimdal for Windows can insert
>> Kerberos credentials into the Windows LSA credential cache.
> 
> Interesting. I don't know why I thought it wasn't possible but it's
> actually good to hear that it is.
> 
>>> Again I am not familiar with your "token" code but fortunately, even
>>> though the Authorization: Negotiate ... token is an Internet
> Explorer-ism, its SPNEGO payload is largely compatible with the
>>
>> Huh?  HTTP-Negotiate is published as RFC 4559 and is supported by all
>> major browsers.  The token formats described therein are explicitly the
>> SPNEGO GSS-API token formats; there's no need to hedge with "largely".
> 
> I think it is amusing when someone starts citing RFCs for something
> like this. HTTP-Negotiate was initially used by Internet Explorer and
> Windows and more specifically is a product of the Windows SSPI and it
> is purely a Microsoft invention. So I think it is a little
> disingenuous to cite an RFC in this case. HTTP-Negotiate was defined
> by how Internet Explorer behaved and any RFC that properly states that
> behavior is mostly an afterthought. Having said that, I think it would
> be difficult for MS to change the behavior of SPNEGO now that there is
> an RFC for it and especially since the RFC was written mostly by folks
> from Microsoft.
> 
> As for the "largely" hedge, that is because some of these protocols
> are ill-defined and so naturally the RFCs have holes. For example,
> SPNEGO is supposed to "negotiate" a mechanism. But in practice it does
> no such thing and it's not entirely clear how it could negotiate a
> mechanism. Have you ever seen a client try to do NTLM and end up doing
> Kerberos or vice versa (meaning the server says "no, I can't do
> Kerberos, please try again with NTLM)? No. So SPNEGO is actually
> completely useless.
> 
> And it is equally funny that these "protocols" *try* to implement the
> same thing over and over. SPNEGO is used to select a mech and GSSAPI
> uses an OID to indicate the mech and SASL has a field that specifies
> the mech which leads to the odd scenario where SASL selects the GSSAPI
> mech which in turn has an OID for the actual mech used. Bah! This is
> at least extraneous and silly in light of the fact that in practice
> 99.999% of clients are ultimately just using Kerberos or NTLM.
> 
> And as long as I'm peeing in the punch, I would go so far as to say
> these "protocols" are BAD because it's another layer of stuff that
> someone has to implement and so things like integrity and
> confidentiality options are left out or implemented incorrectly
> because it's not clear how the various flags translate from one layer
> to the next.
> 
> So now you ask "ok genius, so what would have been the correct way to
> implement this stuff"? The answer is to just do / document what the
> actual working implementation does and not try to design a new Utopian
> protocol from round table at some tech conference (think X.500 and DER
> encoding). For example, in the case of HTTP authentication, it should
> have never used SPNEGO or GSSAPI. It should have just used raw
> Kerberos or raw NTLMSSP as defined by the MS InitializeSecurityContext
> function. Incidentally, this would have made negotiation fairly easy.
> 
> So I would argue HTTP authentication should have looked something like
> this (with "negotiation" thrown in for fun):
> 
> C: Authorization: NTLMSSP <optimistic token>
> S: WWW-Authenticate: Kerberos
> C: Authorization: Kerberos <optimistic token>
> S: 200 OK
> 
> Although in practice negotiation would not occur any more than it does
> now because 99.999% of clients are using two protocols and clients
> always favor Kerberos.
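The hypothetical exchange above can be sketched as a small client loop. This is purely illustrative: no "NTLMSSP" or "Kerberos" HTTP auth schemes are standardized, and `get_token` stands in for whatever the platform security library (e.g. InitializeSecurityContext) would produce.

```python
def get_token(mech):
    # Placeholder: a real client would call its security library here
    # to mint an optimistic token for the chosen mechanism.
    return "TOKEN-FOR-" + mech

def authenticate(send_request, preferred=("NTLMSSP", "Kerberos")):
    """Try mechanisms optimistically; fall back to whatever the server offers."""
    mech = preferred[0]
    for _ in range(len(preferred) + 1):
        status, offered = send_request(
            "Authorization: %s %s" % (mech, get_token(mech)))
        if status == 200:
            return mech                    # server accepted the optimistic token
        if offered in preferred and offered != mech:
            mech = offered                 # server counter-offered another mech
        else:
            break
    raise RuntimeError("no mutually supported mechanism")

def toy_server(header):
    # Toy server that only accepts Kerberos, mirroring the S: lines above.
    if header.startswith("Authorization: Kerberos "):
        return 200, None
    return 401, "Kerberos"

# Client starts optimistically with NTLMSSP; the server counter-offers
# Kerberos and the second attempt succeeds.
assert authenticate(toy_server) == "Kerberos"
```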
> 
>>> that do this type of stuff. HTTP authentication is actually a lot
>>> harder than it looks because HTTP is stateless and so technically
>>> trying to do authentication (which is inherently stateful) over a
>>> stateless protocol is something of an oxymoron. So test your code
>>> carefully using all of the possible environmental parameters like
>>> different servers or whatever.
>>
>> HTTP-Negotiate is even worse, in that it ties things to the underlying
>> TCP/TLS connection, which is a clear violation of the spec.  There are
>> mostly-reasonable ways to do HTTP authentication with strong crypto
>> (imagine an RPC-like system which gets a token that is used to
>> authenticate subsequent requests, with explicit parameters in the RPC to
>> preserve server state across iterations in the authentication loop), but
>> this one is not it.
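The RPC-like idea quoted above, where explicit parameters carry the server's loop state instead of the TCP connection, can be sketched like this. Everything here (the sealing format, the field names) is invented for illustration: the server hands its state back to the client under a MAC, so any server in a pool can resume the authentication loop without shared session storage.

```python
import hmac, hashlib, os

# Key shared across the server pool; per-process random here for the demo.
SERVER_KEY = os.urandom(32)

def seal(state):
    # Return the server's loop state to the client, MACed so the client
    # cannot tamper with it, instead of storing it per-connection.
    mac = hmac.new(SERVER_KEY, state.encode(), hashlib.sha256).hexdigest()
    return "%s|%s" % (state, mac)

def unseal(blob):
    # Verify the MAC before trusting state echoed back by the client.
    state, _, mac = blob.rpartition("|")
    good = hmac.new(SERVER_KEY, state.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, good):
        raise ValueError("tampered state")
    return state

blob = seal("step=1;nonce=abc123")
assert unseal(blob) == "step=1;nonce=abc123"
```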
> 
> Actually last I checked Kerberos over HTTP does not authenticate the
> TCP connection. The Kerberos ticket is actually re-submitted with
> every single request. But NTLM does authenticate the TCP connection
> which makes it a violation of the HTTP spec. It's not clear to me why
> an authenticated TLS connection is bad in some way.

Two reasons which interdepend:

* HTTP is designed to span multiple TCP connection "hops". The common
Internet scenario is 2-3 hops, maybe more. On a centrally controlled LAN
or large enterprise multi-POP network there may be 2 hops.

* HTTP itself is stateless. In particular, the proxy hops above may be
coalescing / multiplexing messages from multiple clients onto any given
server TCP connection. At least in situations where NTLM and Negotiate
are absent, they will.


In order to support the NTLM protocol, with its TCP-level binding,
proxies have to disable almost all HTTP functionality and act as if they
were effectively SOCKS proxies. This is an extremely large performance
decrease.

There is also a large latency increase, because the NTLM handshake as
implemented by popular software requires two TCP connections to be set
up and torn down for each proxy transited. With 2 or more proxies in the
chain this becomes almost impossible to complete, so we tell people NTLM
does not work at all over the Internet.

Kerberos helps with its simpler handshake, but the multi-hop problems
still prevent most HTTP multiplexing-related features from being used
when performance is needed.


> Assuming the certs
> are proper and the chain of authority is validated by both ends and
> such, I would think that would be pretty secure. But re-submitting the
> Kerberos ticket with each request and / or using TLS just to make the
> auth step stateful is pretty inefficient (especially if there's a big
> PAC in the Kerberos ticket).

Efficiency there is relative to the HTTP multiplexing behaviour. If a
single TCP connection is being used by N clients with small spikes in an
otherwise low background of traffic, the TCP connection persistence can
vastly reduce the overall response latency for all clients. It also
frees up N-1 TCP sockets for use by other clients.

Proxies and servers are often dealing simultaneously with 3-4 orders of
magnitude more requests than a single client is sending, so reducing the
TCP connection count massively raises server capacity and DoS tolerance.
That tends to be why HTTP caching proxies and CDNs are used in the first
place.

> 
> However, I don't really blame HTTP-Negotiate or HTTP-NTLM for
> violating the HTTP spec because the HTTP spec provides NO WAY to do a
> complete stand-alone authentication. Authentication is inherently
> stateful because there is always some kind of "challenge" or "nonce"
> that one or both sides must factor into a hash that the other end
> needs to compute separately to prove that the other end knows The
> Secret. So if a client starts authentication with one HTTP server and
> a "nonce" is computed and saved in some state on the server but the
> next request gets routed to a different server, that server will have
> no knowledge of the nonce and thus authentication is not possible.
> 
> This is why Digest Authentication is vulnerable to replay attacks
> because it carries the "nonce" with the request. Because subsequent
> requests could go to a different server, the server cannot save the
> nonce to verify it's not being replayed. So a Digest server
> implementation would have to just trust the nonce in order to be truly
> stateless.
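The replay point above can be seen directly in the core RFC 2617 computation (shown here without the optional qop/cnonce fields): every input the server needs for verification travels with the request itself.

```python
import hashlib

def md5_hex(s):
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce):
    # Core RFC 2617 computation, omitting the optional qop/cnonce fields.
    ha1 = md5_hex("%s:%s:%s" % (user, realm, password))
    ha2 = md5_hex("%s:%s" % (method, uri))
    return md5_hex("%s:%s:%s" % (ha1, nonce, ha2))

# A byte-for-byte replay of an old request (same nonce) verifies
# identically unless the server remembers which nonces it has seen.
original = digest_response("alice", "example", "s3cret", "GET", "/", "abc123")
replayed = digest_response("alice", "example", "s3cret", "GET", "/", "abc123")
assert original == replayed
```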
> 
> Note that Kerberos over HTTP is not a complete stand-alone
> authentication. The authentication already happened when the client
> got a TGT from a third party service. The client is just passing what
> is effectively a temporary key that must be validated by said third
> party.
> 
> I'm not sure what you mean by using RPCs but bear in mind that any
> kind of third party service could NOT be based on HTTP (because that
> would just be pushing the poop around without actually getting rid of
> it). And a non-HTTP based third party authentication service probably
> would not play well on the Internet. So HTTP sites are still
> processing plaintext passwords on the server which is of course
> ridiculously insecure ...
> 
> I haven't really thought too much about this but I have to wonder if
> it would be better to make HTTP optionally / partially stateful where
> a client could generate a "Client-ID" every once in a while which
> would just be a long random number and then require HTTP proxies and
> load balancers and such to maintain an index of these IDs and then
> *try* to route requests to the same downstream server. I think they
> already pretty much have to do this for TLS

No. For TLS, the connections to client and to server are forced to be
pinned together end-to-end and treated as SOCKS-like, the same as when
handling traffic with NTLM in it. The same TCP performance vs
multiplexing problems result, and additionally the encrypted content in
theory cannot be cached, so there is not even a chance for caching
proxies to reduce the traffic load on the end-server. (The sad reality
is that this just forces TLS MITM to become popular.)


> and proxies worth more
> than their weight in bytes probably already do this for session IDs to
> implement session stickiness. But with the Client-ID method it would
> not have to be tied to a TCP connection and with one new header we
> might knock out cookies and session ids and other such things which of
> course are just weak methods that try to work-around HTTP being
> stateless. WRT authentication, the server would just use the Client-ID
> to lookup the authentication state. And if the Client-ID also included
> an integrity code, that would go a looong way.
> 
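One way the "Client-ID with an integrity code" idea above could look: a long random value plus a keyed MAC so servers can cheaply reject forged or tampered IDs before any state lookup. The header layout and key handling here are invented for illustration; many other designs are possible.

```python
import hmac, hashlib, os

# Key that would be shared across the server pool; random per process here.
SERVER_KEY = os.urandom(32)

def mint_client_id():
    # Long random identifier plus a truncated HMAC as the integrity code.
    raw = os.urandom(16).hex()
    mac = hmac.new(SERVER_KEY, raw.encode(), hashlib.sha256).hexdigest()[:16]
    return "%s.%s" % (raw, mac)

def verify_client_id(client_id):
    # Recompute the MAC over the random part and compare in constant time.
    raw, _, mac = client_id.partition(".")
    good = hmac.new(SERVER_KEY, raw.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(mac, good)

cid = mint_client_id()
assert verify_client_id(cid)
# Any single-character tampering invalidates the integrity code.
assert not verify_client_id(cid[:-1] + ("0" if cid[-1] != "0" else "1"))
```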

Speaking for the Squid HTTP caching proxy: we use the Negotiate ticket
value as presented in the message WWW-auth header, comparing it to the
previously delivered one to ensure the client is still sending the same
auth.

Due to some server implementations (including Squid itself, due to the
above) assuming connection ties, we are still required to pin the
connections together as with NTLM. That still leaves us with an
unfortunately high turnover in TCP sockets, but more HTTP features can
be used reliably than with NTLM.
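The per-connection check described above amounts to remembering the first token seen on a connection and rejecting any later request that presents a different one. A minimal sketch of that idea (not Squid code, just the described behaviour):

```python
class ConnectionAuthState:
    """Tracks the auth token first presented on one client connection."""

    def __init__(self):
        self.token = None

    def check(self, presented):
        if self.token is None:
            self.token = presented        # first request on this connection
            return True
        return self.token == presented    # client must keep sending same auth

conn = ConnectionAuthState()
assert conn.check("YII...abc")
assert conn.check("YII...abc")       # same token: still the same client
assert not conn.check("YII...xyz")   # different token: treat as a new auth
```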


PS. I am currently working on adding support to Squid for an HTTP
scheme "Kerberos" that uses the bare non-SPNEGO/GSSAPI token value. If
others are interested in working out the details to get this going
without the TCP-level pinning, I am interested in collaborating.

HTH
Amos

Received on Friday, 26 June 2015 05:06:55 UTC