- From: Kyle Rose <krose@krose.org>
- Date: Thu, 22 Oct 2015 14:11:20 -0400
- To: Martin Thomson <martin.thomson@gmail.com>
- Cc: HTTP Working Group <ietf-http-wg@w3.org>
- Message-ID: <CAJU8_nW44dLrATV=P3SSCQ7VSJk+yyV=2F+1q5KKP5pZ6RO53Q@mail.gmail.com>
Sorry to take so long to respond to this, but life and work get in the way. I wanted to give a perspective on how one enterprise uses client certificates in real life, and the reasons for that.

Like all great decisions, the one my company made to introduce client certs was the result of a compromise. The original requirement was for a second factor ("something you have") in addition to a password ("something you know"), because the thing you know is often of poor quality in order to make remembering it easy: the standard way to get around password re-use restrictions on rotation, for instance, is to take a prefix that never changes and append a number in monotonically-increasing fashion at each rotation. (Why password rotation in the first place? Because PCI DSS §8.2.4.) Trials were performed with HOTP tokens, which did not go well for various reasons I'm not privy to, though I've heard that will be revisited in the next year or two.

There was also a desire to have a solution that reduced the attack surface. This is where the decision to use client certs intersects with the conversation here. One of the properties of our specific MO for client certs is that a client cannot complete a TLS handshake with any of the RPs without first presenting an acceptable client certificate (which may then result in a redirect to the SSO IdP/token issuer if successful). What this does is create a firewall around applications with potentially faulty authorization logic. It also makes for a very poor user experience when certificates expire, but this was deemed "acceptable" in compromise-logic.
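(To make the failure mode concrete: the effect is roughly what a handshake-time check like the following produces. This is only a minimal sketch using Python's ssl module, with made-up file names rather than our actual stack.)

    import socket
    import ssl

    # The handshake-time "firewall": without an acceptable client certificate
    # the TLS handshake itself fails, so the application behind it is never
    # reachable at all.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("rp.example.crt", "rp.example.key")
    ctx.load_verify_locations("corp-client-ca.pem")  # CA that issues the client certs
    ctx.verify_mode = ssl.CERT_REQUIRED              # no cert, no connection

    with socket.create_server(("", 8443)) as srv:
        conn, _addr = srv.accept()
        try:
            tls = ctx.wrap_socket(conn, server_side=True)  # handshake happens here
            tls.close()  # a real server would hand the connection to the app
        except ssl.SSLError:
            # An expired or missing certificate dies right here; the browser
            # only ever sees a handshake alert, never a friendly HTTP error page.
            conn.close()

TLS 1.3's ability to request a certificate after the handshake is what would let that first step succeed and defer the check, which is where the UX discussion further down comes in.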
Having a client cert also raises the bar ever so slightly to access from unauthorized equipment. Yes, it's trivial to transfer the cert to a browser on another machine, but this is a triviality that few will bother with, especially among the demographic whose collective attention paid to security is most troublesome. Keeping 3 year-old Android phones 2 years past their most recent security update away from corporate resources is a feature, not a bug.

Sealing the deal, our CDN happens to support client certificates: we can eat our own dog food *and* do it while requiring a second factor. Profit.

The way in which this is relevant is that it would be nice to present the user a better error than "ssl_error_handshake_failure_alert" in the case of an expired or missing certificate. To that end, switching to a model in which any client can connect and perform some actions, and only later be required to authenticate, would be helpful. The downside of supporting this is that the handshake-time firewall between RP authentication logic and clients would be eliminated; the upside is that the first principle of IT is bureaucratic inertia, so I suspect that justification would be silently dropped in favor of keeping the second factor scheme and improving the UX.

As for H/1.1 and TLS <= 1.2, we presumably disabled renegotiation entirely due to the potential for downgrade attacks, and so are unlikely to solve the UX issue prior to TLS 1.3.

Kyle

On Wed, Sep 23, 2015 at 1:16 PM, Martin Thomson <martin.thomson@gmail.com> wrote:

> The minutes of the TLS interim have been posted. Some decisions regarding client authentication were made.
>
> https://www.ietf.org/proceedings/interim/2015/09/21/tls/minutes/minutes-interim-2015-tls-3
>
> Here is a summary of the applicable pieces, plus what options it provides HTTP/2...
>
> (Caveat here: aspects of this could change if new information is presented, but it seems unlikely that there will be changes that will affect the core decisions.)
>
> The big change is that a server can request client authentication at any time. A server may also make multiple such requests. Those multiple requests could even be concurrent.
>
> The security claims associated with client authentication require more analysis before we can be certain, but the basic idea is that authentication merely provides the proof that a server needs to regard the entire session to be authentic. In other words, client authentication will apply retroactively. This could allow a request sent prior to authentication to be considered authenticated. This is a property that is implicitly relied on for the existing renegotiation cases and one that we might want to exploit.
>
> Each certificate request includes an identifier that allows it to be correlated with the certificate that is produced in response. This also allows for correlating with application context. This is what I think we can use to fix HTTP/2.
>
> Clients cannot spontaneously authenticate, which invalidates the designs I have proposed; however, the basic structure is the basis for the first option that I will suggest.
>
> Option 1 uses a new authentication scheme. A request that causes a server to require a client certificate is responded to with a 4xx response containing a ClientCertificate challenge. That challenge includes an identifier. The server also sends - at the TLS layer - a CertificateRequest containing the same identifier, allowing the client to correlate its HTTP request with the server's CertificateRequest.
>
> Client@HTTP/2:
>   HEADERS
>     :method = GET ...
>
> Server@HTTP/2:
>   HEADERS
>     :status = 401
>     authorization = ClientCertificate req="option 1"
>
> Server@TLS:
>   CertificateRequest { id: "option 1" }
>
> Client@TLS:
>   Certificate+CertificateVerify { id: "option 1", certificates... }
>
> Client@HTTP/2:
>   HEADERS
>     :method = GET ...
>
> Server@HTTP/2:
>   HEADERS
>     :status = 200
>
> Option 2 aims to more closely replicate the experience we get from renegotiation in HTTP/1.1 + TLS <= 1.2. Rather than rejecting the request, the server sends an HTTP/2 frame on the stream to indicate to the client to expect a CertificateRequest. That frame includes the identifier.
>
> Client@HTTP/2:
>   HEADERS
>     :method = GET ...
>
> Server@HTTP/2:
>   EXPECT_AUTH
>     id = option 2
>
> Server@TLS:
>   CertificateRequest { id: "option 2" }
>
> Client@TLS:
>   Certificate+CertificateVerify { id: "option 2", certificates... }
>
> Server@HTTP/2:
>   HEADERS
>     :status = 200
>
> In this case, the server probably wants to know that the client is willing to respond to these requests; otherwise it will want to use HTTP_1_1_REQUIRED or 421. So a companion setting to enable this is a good idea (the semantics of the setting that Microsoft uses for renegotiation are pretty much exactly what we'd need).
>
> I think that the first option has some architectural advantages, but that is all. The latter more closely replicates what people do today and for that reason, I think that it is the best option.
>
> As for how to implement this same basic mechanism in TLS 1.2, I have an idea that will work for either option, but it's a bit disgusting, so I'll save that for a follow-up email.
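P.S. For concreteness, here is the sort of bookkeeping option 1 above seems to imply on the server side. Everything named below is hypothetical (there is no real HTTP/2 or TLS API in it); the only point is that the challenge id ties the 401, the TLS CertificateRequest, and the original application context together.

    import secrets

    # Hypothetical sketch: the server challenges an unauthenticated request and
    # remembers the id it used, so that when the TLS stack later reports
    # Certificate+CertificateVerify for that id, the pending context is found.
    pending = {}  # challenge id -> (stream id, request it was issued for)

    def challenge(stream_id, request):
        req_id = secrets.token_urlsafe(8)
        pending[req_id] = (stream_id, request)
        response_headers = {
            ":status": "401",
            # header layout follows the option 1 exchange sketched above
            "authorization": 'ClientCertificate req="%s"' % req_id,
        }
        tls_certificate_request = {"id": req_id}  # handed to the TLS layer
        return response_headers, tls_certificate_request

    def on_client_certificate(req_id, verified):
        stream_id, request = pending.pop(req_id, (None, None))
        if verified and request is not None:
            return "stream %d: %s may now be treated as authenticated" % (stream_id, request)
        return "unknown id or failed verification"

    # Toy run-through of the exchange:
    headers, cert_req = challenge(stream_id=1, request="GET /protected")
    print(headers, cert_req)
    print(on_client_certificate(cert_req["id"], verified=True))

Option 2 would need essentially the same table; the id just arrives via an EXPECT_AUTH frame rather than a 401.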
Received on Thursday, 22 October 2015 18:12:09 UTC