Re: Call for Adoption: Secondary Certificate Authentication in HTTP/2

On Mon, Jul 04, 2016 at 03:48:24PM -0700, Eric Rescorla wrote:
> Document: draft-bishop-httpbis-http2-additional-certs-01
> In each of these cases, it seems like there are two phases:
> 1. The relying party indicates the identity it would like the
>    peer to authenticate for.
> 2. The authenticating party supplies a certificate and proves
>    possession of the private key corresponding to the certificate.
> The second of these seems pretty similar for both use cases, but the
> first actually is rather different and seems rather shoe-horned in in
> an attempt to make it symmetrical. I would suggest instead that we
> provide different mechanisms for the "certificate request" phase that
> more closely track the TLS 1.3 mechanisms. Specifically:

Or, if one really wanted to hack things, one could put the server name
into the client's request as a SubjectAlternativeName certificate
extension.
> - When requesting that the server authenticate for a new origin,
>   the client should supply a new domain name.
> - When requesting that the client authenticate for a new certificate
>   the server should supply a CertificateRequest which indicates
>   detailed certificate properties (this is more or less what the
>   draft does).
> I think that this would be clearer and provide a better match for the
> use case.

I think one also needs to sign and MAC over any implicit parameters
that are shared across multiple authentications, e.g. the supported
end-certificate signature algorithms.
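
To illustrate the point, here is a minimal sketch (names and encoding are invented, not from the draft) of folding connection-wide implicit parameters into the value each proof is computed over, so a tampered parameter set cannot be paired with an old proof:

```python
import hashlib
import hmac

def proof_input(implicit_params: bytes, request_id: int, certificate: bytes) -> bytes:
    """Build the byte string a hypothetical CERTIFICATE_PROOF would cover."""
    def lp(b: bytes) -> bytes:
        # Length-prefix each field so the encoding is unambiguous.
        return len(b).to_bytes(2, "big") + b
    return lp(implicit_params) + request_id.to_bytes(4, "big") + lp(certificate)

def mac_proof(key: bytes, implicit_params: bytes, request_id: int, cert: bytes) -> bytes:
    # Stand-in for the real signature: a MAC over the full transcript,
    # including the implicit (shared) parameters.
    return hmac.new(key, proof_input(implicit_params, request_id, cert),
                    hashlib.sha256).digest()

# Changing the implicit parameters changes every proof.
k = b"\x00" * 32
p1 = mac_proof(k, b"sigalgs:ed25519,p256", 1, b"certbytes")
p2 = mac_proof(k, b"sigalgs:ed25519", 1, b"certbytes")
assert p1 != p2
```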
> I'd also like to understand the relationship of this mechanism
> with TOKBIND, as it seems that it has extremely similar properties,
> albeit being restricted to H2.

Not really:

TOKBIND is intended to restrict the usage of cookies and similar
resources. It is not intended to convey any sort of authority, and
using it for that purpose is highly dangerous. This mechanism, by
contrast, exists explicitly to convey authority.

> S 1.1.
> I'm a bit skeptical of this encrypted SNI use case, though I suppose
> it might work in some settings. As DKG has suggested, there are applications
> of encrypted SNI where you want the gateway not to be able to see the
> plaintext.

My understanding of the "encrypted SNI" use case is that the mechanism
is intended to be used when both the "real" resources and the "decoy"
resources are hosted on the same server.

One could do the "gateway" case already (albeit at the cost of double
encryption, which is fundamental to riding on top of TLS) using
CONNECT, at least at the level of a low-level implementation.

> S 2.
> This AUTOMATIC_USE thing seems very dangerous, as indicated in S 5.  I
> would suggest instead that servers always have this semantic (you
> require this anyway in S 3.5) and that clients never have this
> semantic. In addition, I would forbid ambient authentication with
> any certificate (including one established at the TLS layer) once
> the client has authenticated with a certificate at the HTTP layer,
> and reserve some indicator for the certificates established in TLS.

Or better yet, subject any TLS-layer certificates to the same control
mechanism once the flag indicating support for this feature is flipped.
Such a control mechanism is needed on the client side anyway. I'm not
sure it is needed on the server side, since the server always
designates its certificates explicitly.

> S 3.3.
> It's pretty odd that you allow servers to take a position on which
> extensions should be in certs but not on what signature algorithms
> should be used to sign them. Also, as noted above, all the fields here
> are useless in the server->client authentication case (it's not like
> you're going to provide the whole trust anchor list). You should
> use a different format here for server and client, because most of
> the signaling here in client->server is cruft.

If one wanted symmetry, an empty trust anchor list (meaning "implicit")
wouldn't be that bad. Yes, the whole browser trust anchor list is far
too big to send.

> What is a peer supposed to do if it receives a request that it
> thinks it has already satisfied (e.g., a duplicate SAN?).

I think signaling no certificate would be reasonable.

However, if done that way, one has to be careful to explicitly
specify how certificate sets behave on resumption (e.g. clear all).
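
A sketch of what "signal no certificate for duplicates, clear all on resumption" might look like (the class and method names are invented for illustration):

```python
from typing import Optional

class CertSet:
    """Per-connection set of already-proven certificates (hypothetical)."""

    def __init__(self):
        self.proven = {}  # SAN -> certificate bytes

    def satisfy(self, san: str, cert: bytes) -> Optional[bytes]:
        if san in self.proven:
            return None  # duplicate request: signal "no certificate"
        self.proven[san] = cert
        return cert

    def on_resumption(self):
        # Explicit "clear all" semantics so stale authorizations
        # do not carry over into the resumed session.
        self.proven.clear()

s = CertSet()
assert s.satisfy("example.com", b"cert") == b"cert"
assert s.satisfy("example.com", b"cert") is None  # already satisfied
s.on_resumption()
assert s.satisfy("example.com", b"cert") == b"cert"
```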

> S 3.5.
> Signing the same value with every CERTIFICATE_PROOF seems like it's
> really living on the edge. Minimally, it seems like you have a
> reflection attack where the client is able to replay the server's
> CERTIFICATE_PROOF back to it. I would recommend that:
> - You sign over the certificate and the RequestID (I don't have a
>   concrete attack but it just seems like an abundance of caution).
>   You could just stuff it in the context parameter.
> - Have the client and server exporters be different to avoid reflection.
> - Also, nothing wrong with 64 bytes, but it seems a bit long, no?

I thought the client and server already use different contexts to
prevent reflection (but maybe not; I think they should)?

RequestID/Context could be a counter?
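
The reflection defense can be sketched as deriving role-specific proof bases from the same exporter secret with distinct context labels (the labels below are illustrative, not from the draft):

```python
import hashlib
import hmac

def proof_base(exporter_secret: bytes, role_label: str, request_id: int) -> bytes:
    # Mix a role-specific label and the RequestID into the derivation,
    # so a server proof can never verify as a client proof.
    context = role_label.encode("ascii") + request_id.to_bytes(4, "big")
    return hmac.new(exporter_secret, context, hashlib.sha256).digest()

secret = b"\x42" * 32
client_base = proof_base(secret, "EXPORTER: h2 client cert proof", 7)
server_base = proof_base(secret, "EXPORTER: h2 server cert proof", 7)
assert client_base != server_base  # a reflected proof cannot verify
```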

> S 6.1.
> A two byte bitfield indicating which algorithms you support seems
> like premature optimization. It's easy to see how this gets to be
> half-full, just with algorithms that we know of today (4Q, digest
> signatures, some sort of lattice-based post-quantum thing). I would
> think at least 32-bit field would be wise.

Also note that if you have fields like this, I think you would need to
sign and MAC over their values, even if they are not actually part of
every authentication exchange.
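
A 32-bit field as suggested above might be encoded like this (the bit assignments are invented for illustration):

```python
# Hypothetical 32-bit "supported algorithms" bitfield.
ALGORITHMS = {
    "rsa_pss_sha256": 0,
    "ecdsa_p256_sha256": 1,
    "ed25519": 2,
    "ed448": 3,
    # bits 4..31 reserved for future algorithms (post-quantum, etc.)
}

def encode_supported(names) -> bytes:
    bits = 0
    for name in names:
        bits |= 1 << ALGORITHMS[name]
    return bits.to_bytes(4, "big")  # 32-bit field on the wire

def decode_supported(wire: bytes):
    bits = int.from_bytes(wire, "big")
    return {n for n, i in ALGORITHMS.items() if bits & (1 << i)}

wire = encode_supported(["ed25519", "ecdsa_p256_sha256"])
assert len(wire) == 4
assert decode_supported(wire) == {"ed25519", "ecdsa_p256_sha256"}
```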

Also, I had the idea of sending the certificate on a non-0 stream
opened for the purpose. However, this might hit nasty implementability
problems (the original HTTP/2 spec is not careful with schemes, e.g. it
doesn't require a stream error on an unknown scheme) and could also
cause deadlocks. At the least, one could send clear stream errors
instead of messing with frame types.


Received on Sunday, 24 July 2016 10:35:11 UTC