RE: Client Certificates - re-opening discussion

We have historically had cases where customers were either legally mandated to use client certificate authentication specifically, or more generally had an IT requirement to use two-factor authentication to access enterprise resources.  I'll research some of these cases and see whether I can share details to frame this conversation in Yokohama.  Internally, we use it regularly – the certificate lives on a smartcard or in the TPM, or was simply issued to the machine when it enrolled in device management.

For us, at least, the “pain” is that we can’t support a legal requirement without falling back to HTTP/1.1 and generating even more round-trips.  Our HTTP/2 investments don’t apply as soon as we’re talking to the auth server.
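To make the requirement concrete, here is a minimal sketch (using Python's `ssl` module; the file names are placeholders, not our actual configuration) of what "require a client certificate" means at the TLS layer:

```python
import ssl

# Minimal sketch: a TLS server context that demands a client certificate
# during the handshake. File paths below are placeholders.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# ctx.load_cert_chain("server.pem", "server.key")    # placeholder server credentials
# ctx.load_verify_locations("enterprise-ca.pem")     # CA that issued the client certs
ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails if no client cert is presented
print(ctx.verify_mode == ssl.CERT_REQUIRED)
```

The pain point is that triggering this per-resource rather than per-connection requires TLS renegotiation, which HTTP/2 forbids, hence the fallback to HTTP/1.1.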

From: Mike Belshe [mailto:mike@belshe.com]
Sent: Friday, September 18, 2015 11:20 AM
To: Mark Nottingham <mnot@mnot.net>
Cc: Henry Story <henry.story@co-operating.systems>; HTTP Working Group <ietf-http-wg@w3.org>
Subject: Re: Client Certificates - re-opening discussion

In a strange twist of fate I find myself doing a lot of PKI work these days, and I've considered a fair bit about how client-certs might help with some of my application-level needs.

However, just like HTTP's basic-auth, I wonder whether HTTP- or TLS-level client certs will just never be used.  My concern, of course, is that we build something that has a user experience similar to HTTP's basic-auth.  It's so bad that nobody can use it, and authentication gets pulled into web pages (where, ironically, it is less secure!).

Mark - you said there is "pain".  Is there a set of use cases to be solved here?  Let me know if I missed them - I may be able to contribute.

My suspicion is that we really need crypto features moved up a level from the protocol, as it will be very difficult to make satisfactory user interfaces from the protocol level alone.  Perhaps for machine-to-machine auth it would be okay.

Mike

On Fri, Sep 18, 2015 at 10:05 AM, Mark Nottingham <mnot@mnot.net> wrote:
Hi Henry,

Thanks, but this is a much more narrowly-scoped discussion -- how to make client certs, as they currently operate, work in HTTP/2. At most, I think we'd be talking about incrementally improving client certs (e.g., clarifying / optimising the scope of their applicability -- and that really is just an example, not a statement of intent).

Cheers,


> On 18 Sep 2015, at 11:53 am, Henry Story <henry.story@co-operating.systems> wrote:
>
>
>> On 17 Sep 2015, at 23:10, Mark Nottingham <mnot@mnot.net> wrote:
>>
>> Hi,
>>
>> We've talked about client certificates in HTTP/2 (and elsewhere) for a while, but the discussion has stalled.
>>
>> I've heard from numerous places that this is causing Pain. So, I'd like to devote a chunk of our time in Yokohama to discussing this.
>>
>> If you have a proposal or thoughts that might become a proposal in this area, please brush it off and be prepared. Of course, we can discuss on-list in the meantime.
>>
>> Cheers,
>>
>> --
>> Mark Nottingham   https://www.mnot.net/

>
>
> Apart from proposals such as the one by Martin Thomson
> and the follow-up work referenced earlier in this thread
> by Mike Bishop [1], I'd like to mention more HTTP-centric
> prototypes that would rely perhaps not so much on certificates
> as on linked public keys, building on existing HTTP
> mechanisms such as WWW-Authenticate, which, if they pass security
> scrutiny, would fit nicely, it seems to me, with HTTP/2.
>
> • Andrei Sambra's first sketch authentication protocol
>   https://github.com/solid/solid-spec#webid-rsa

>
> • Manu Sporny's more fully fleshed out HTTP Message signature
>   https://tools.ietf.org/html/draft-cavage-http-signatures-04

>
> These and the more TLS-centric protocols require the user
> agent to be able to use public/private keys generated by
> the agent, and signed or published by the origin, to
> authenticate or sign documents across origins.
>
> This is where one often runs into the Same Origin Policy (SOP)
> stone wall. There was an important discussion on
> public-webappsec@w3.org [1] and public-web-security@w3.org
> entitled
>
>   "A Somewhat Critical View of SOP (Same Origin Policy)" [2]
>
> that I think has helped clarify the distinctions between Same-Origin
> Policy, linkability, privacy, and user control, and which I hope
> has shown that this policy can neither be applied without
> care nor be applied everywhere.
>
> The arguments developed there should be helpful in opening the
> discussion here and elsewhere too, I think. In a couple of e-mails in
> that thread, I went into great detail showing how SOP, linkability,
> user control, and privacy apply in very different ways to four
> technologies: cookies, FIDO, the JS Crypto API, and client
> certificates [3]. This shows that the concepts don't overlap (two
> being technical and two legal/philosophical), with each technology
> enabling some aspect of the others, and not always in the way one
> would expect.
>
> Having made those conceptual distinctions, I think the path to
> acceptance of the solutions proposed by this group will be much smoother.
>
> Looking forward to following and testing work developed here,
>
> All the best,
>
>       Henry
>
>
> [1] • starting: https://lists.w3.org/Archives/Public/ietf-http-wg/2015AprJun/0558.html

>    • most recent by Mike Bishop
>    https://lists.w3.org/Archives/Public/ietf-http-wg/2015JulSep/0310.html

> [2] https://lists.w3.org/Archives/Public/public-webappsec/2015Sep/

> [3] https://lists.w3.org/Archives/Public/public-webappsec/2015Sep/0101.html

>  which is in part summarised with respect to FIDO in a much shorter
>  email
>    https://lists.w3.org/Archives/Public/public-webappsec/2015Sep/0119.html

>
--
Mark Nottingham   https://www.mnot.net/

Received on Friday, 18 September 2015 18:32:14 UTC