Re: Required Domain proposal for Additional Certificates

Hi Ryan,

Thanks for your reply.

On Sat, Mar 30, 2019 at 3:28 AM Ryan Sleevi <ryan-ietf@sleevi.com> wrote:

>
>
> On Fri, Mar 29, 2019 at 9:20 PM Nick Sullivan <nick@cloudflare.com> wrote:
>
>> This is unfortunate because it increases the utility of a misissued
>> certificate to an attacker. However, because the Required Domain is part of
>> the certificate, there are some mitigating factors:
>> - Any misissued certificate can be detected in the certificate
>> transparency logs and revoked.
>>
>
> I'm not sure that this is a meaningful mitigation, and thus the implied
> value it provides may be overstated here.
>
> This is because we need to acknowledge that while Certificate Transparency
> is a technology with a spec that describes the technical implementation
> (RFC 6962), much like RFC 5280 does not dictate how to select 'trust
> anchors', RFC 6962 does not dictate how clients can or should use
> Certificate Transparency. As such, the conclusion here - detection - is
> implicitly assuming certain deployment/policy-level properties that haven't
> been enumerated, in the pull request or in the original secondary certs
> draft.
>
> Further, while CT provides a sound set of cryptographic building blocks to
> enable the detection of, say, split log views, clients themselves need to
> deploy mechanisms to allow that. This is also a policy consideration - in
> that different approaches, such as gossip or proof-stapling, have their
> own set of privacy and performance tradeoffs and assumptions.
>

Maybe I wasn't clear. I'm not suggesting that UAs should detect misissuance
on the client. What I am suggesting is that operators can detect misissuance
via CT monitoring. The majority of publicly-trusted certificates are already
logged in a set of CT logs trusted by multiple UAs -- the CT equivalent of
'trust anchors'. Many server operators already leverage CT monitoring to
detect misissuance on their domains, to great effect.
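
To make the operator side concrete, here's a rough sketch of the sort of CT
monitoring I mean -- my own illustration, not part of the proposal. It polls
crt.sh for certificates logged against a domain so the operator can alert on
anything they didn't request. The crt.sh endpoint, its JSON field names, and
example.com are all assumptions/placeholders for that particular service:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

// ctEntry mirrors a few fields of crt.sh's JSON output (field names are
// assumptions about that service, not guaranteed by any spec).
type ctEntry struct {
    IssuerName string `json:"issuer_name"`
    CommonName string `json:"common_name"`
    NotBefore  string `json:"not_before"`
    Serial     string `json:"serial_number"`
}

func main() {
    // example.com is a placeholder for the operator's own domain.
    resp, err := http.Get("https://crt.sh/?q=example.com&output=json")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var entries []ctEntry
    if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
        panic(err)
    }

    // A real monitor would diff this list against the certificates the
    // operator actually requested and alert on anything unexpected.
    for _, e := range entries {
        fmt.Printf("%s  %s  issued by %s (serial %s)\n",
            e.NotBefore, e.CommonName, e.IssuerName, e.Serial)
    }
}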


>
> It also further makes assumptions about the capabilities of clients
> regarding revocation and/or out-of-band delivery mechanisms. Revocation is
> not merely a technology matter, but as shown through the use of revocation
> for censorship purposes, one that has a policy angle that needs to be
> carefully considered. The assumption of mitigation here, being revocation,
> is one that seems to presume certain solutions for this space. Similarly,
> the assumption of out-of-band delivery methods for blocking a site (if
> revocation is slow to propagate) similarly presumes an out-of-band update
> mechanism.
>

Because the trade-offs involved in revocation checking are judged
differently by different UAs, what we can do here is provide reasonable
recommendations.

It was previously suggested that UAs may choose to require OCSP stapling in
order to accept a secondary certificate -- this seems like a reasonable
requirement but it leaves the client open to attack for the lifetime of the
last valid OCSP response. If the OCSP lifetime is too long for comfort, UAs
may also require that an out-of-band mechanism is in place and has been
updated within a more reasonable amount of time.
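
As a sketch of what such a check could look like on the UA side -- purely
illustrative, nothing here is normative text from the draft, and maxStapleAge
is a hypothetical local policy knob -- using Go's golang.org/x/crypto/ocsp
package:

package coalesce

import (
    "crypto/x509"
    "errors"
    "time"

    "golang.org/x/crypto/ocsp"
)

// acceptSecondary returns nil only if the stapled OCSP response parses,
// reports Good, is unexpired, and was produced within maxStapleAge.
func acceptSecondary(stapleDER []byte, issuer *x509.Certificate, maxStapleAge time.Duration) error {
    if len(stapleDER) == 0 {
        return errors.New("no OCSP staple: refuse to coalesce")
    }
    resp, err := ocsp.ParseResponse(stapleDER, issuer)
    if err != nil {
        return err
    }
    if resp.Status != ocsp.Good {
        return errors.New("certificate revoked or status unknown")
    }
    now := time.Now()
    if now.After(resp.NextUpdate) {
        return errors.New("stapled OCSP response has expired")
    }
    if now.Sub(resp.ThisUpdate) > maxStapleAge {
        return errors.New("stapled OCSP response is older than local freshness policy")
    }
    return nil
}

A UA could run something like this while processing a CERTIFICATE frame,
before treating the certificate as usable for coalescing.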


>
> While it's certainly true that risks - such as on-path adversaries or
> misissued certificates - exist independent of the CERTIFICATE frame, there
> may be a disagreement about the relative cost of the attack compared to the
> value it provides. It strikes me as something similar to the debate about
> SHA-1 vs SHA-256 for signatures. While there are significant similarities -
> they're Merkle-Damgard constructions, they're both cryptographic hash
> algorithms - there's also a significant difference in the work-factor/cost
> to exploit, by many orders of magnitude. Similarly, while the 'utility' of
> a SHA-1 certificate also is limited by an attacker's on-path premise, and
> while CT (in an idealized form) may provide detection, I don't think many
> would advocate that SHA-1 is safe to use because of these mitigations.
>

This is a particularly good strawman. Would this still hold if there were
additional issuance requirements for SHA-1 certificates that, if violated,
would result in disqualification of the CA?


> Separately, but perhaps just as importantly, this overall analysis seems
> to have overlooked the attack scenario of BygoneSSL (
> https://insecure.design/ ). Considering how many CDNs were/are vulnerable
> to this, I'm concerned that this solution fails to meaningfully address
> that threat model, which is more about "misconfiguration" than "actively
> malicious" (although they are admittedly hard to distinguish)
>

I'm familiar with this issue but not the proposed resolution. Even if
certificates were revoked whenever a customer changes CDNs, some UAs don't
check for revocation on the vast majority of certificates. This scenario
suggests that for Required Domain certificates, clients should check
revocation before accepting them as secondary.


> Given the significant investment in technologies that this proposal would
> seem to necessitate - from implementing and deploying an RFC 6962 solution
> (and associated policy), reliance on unreliable revocation information or
> out of band delivery mechanisms, requiring site operators to add yet
> another source of information to actively monitor (by effectively mandating
> all domains monitor CT to prevent easy MITM), exacerbating the risks of
> situations like BygoneSSL, all CAs implementing yet more validation checks
> - perhaps it's worth evaluating the benefit such a solution would provide.
> Based on your description, it seems that the primary goal is saving 1
> connection and 2 RTTs. That does not seem a very cost-effective tradeoff,
> but perhaps I've misunderstood the calculus?
>

The benefits fall into two categories: performance and privacy.

The performance savings are 1 connection, 2-RTTs, and a DNS resolution per
domain that is coalesced. The savings are front-loaded at the beginning of
the HTTP/2 connection, so they have a good chance of improving page load
times for users in real scenarios. On the server side, each coalesced
domain results in one fewer live connection and one fewer handshake. For
servers that handle thousands of simultaneous connections, no longer having
to do redundant handshakes and maintain redundant TLS connections can
result in substantial efficiency gains.
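
As a rough back-of-the-envelope illustration (the latencies and domain count
below are assumptions for the sake of the example, not measurements):

package main

import (
    "fmt"
    "time"
)

func main() {
    const (
        rtt       = 50 * time.Millisecond // assumed network round-trip time
        dnsLookup = 30 * time.Millisecond // assumed resolver latency
        handshake = 2                     // round trips for TCP + TLS 1.3 setup
        coalesced = 4                     // hypothetical subresource domains coalesced
    )
    perDomain := handshake*rtt + dnsLookup
    fmt.Printf("saved per coalesced domain: %v; total for %d domains: %v\n",
        perDomain, coalesced, coalesced*perDomain)
}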

The privacy benefits are also quite clear. DNS queries leak the hostname of
the subresource to the network (or to the resolver, when the UA is using
DoT or DoH). Avoiding a second HTTPS connection also prevents the SNI from
being leaked to the network in the absence of ESNI.

As an operator, you still need to make the investments you listed (logging
certs in CT, CT monitoring, relying on revocation, relying on CAs to not
misissue) if you want to prevent your site from being hijacked by "slightly
less easy" MITM. If an attacker obtains a misissued certificate for your
site, then they can hijack it without being on-path or hijacking BGP by
leveraging a DNS Fragmentation attack (
https://blog.powerdns.com/2018/09/10/spoofing-dns-with-fragments/), for
example. The DNS is a very weak second factor for protecting against MITM.
We're in a bad place right now with respect to misissuance; pretending
we're on solid ground by checking DNS is not helpful.

Adding an additional OID whose use encourages better operational practices
by operators and enforces safer validation by CAs, while providing multiple
privacy and performance benefits, seems like a net positive.
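
Purely for illustration, a client-side check for such an extension might look
something like the sketch below; the OID value and the string encoding are
placeholders I made up, not values from the draft:

package coalesce

import (
    "crypto/x509"
    "encoding/asn1"
    "errors"
)

// requiredDomainOID is a made-up placeholder; the real proposal would have
// to register a specific OID for the extension.
var requiredDomainOID = asn1.ObjectIdentifier{1, 3, 6, 1, 4, 1, 99999, 1}

// requiredDomain extracts the Required Domain name from a certificate, or
// errors if the extension is absent. The string encoding assumed here is
// illustrative only.
func requiredDomain(cert *x509.Certificate) (string, error) {
    for _, ext := range cert.Extensions {
        if ext.Id.Equal(requiredDomainOID) {
            var name string
            if _, err := asn1.Unmarshal(ext.Value, &name); err != nil {
                return "", err
            }
            return name, nil
        }
    }
    return "", errors.New("certificate has no Required Domain extension")
}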


>
> While I understand and appreciate the comparison to ORIGIN, it's worth
> noting that there seems to be limited industry uptake of the ORIGIN frame,
> in part due to the considerable security risks. Perhaps this is a signal
> that the calculus was wrong there as well?
>

Correct me if I'm wrong, but don't both Firefox and Edge skip DNS
validation when connecting to domains on the SAN list of the primary
certificate in the presence of ORIGIN?

In any case, ORIGIN is scoped to work for any certificate, so it carries
with it the baggage of the existing web PKI. This calculus may be different
with new validation requirements. Requiring stronger validation for
certificates that enable new capabilities is a pattern I've seen succeed
for projects like SXG. I'm hopeful that other UAs will chime in here.
