Re: Required Domain proposal for Additional Certificates

I'm not seeing any disagreement with the assertion that this proposal
solves the issue of key compromise. Let me know if you disagree.

On Sat, Mar 30, 2019 at 10:50 PM Ryan Sleevi <ryan-ietf@sleevi.com> wrote:

>
>
> On Sat, Mar 30, 2019 at 9:10 AM Nick Sullivan <nick@cloudflare.com> wrote:
>
>> Maybe I wasn't clear. I'm not suggesting that UAs should
>> detect misissuance on the client. What I am suggesting is that operators can
>> detect misissuance via CT monitoring. The majority of publicly-trusted
>> certificates are already included in a set of CT 'trust anchors' trusted by
>> multiple UAs. Many server operators leverage CT monitoring to detect
>> misissuance on their domains to great effect.
>>
>
> No, you were clear in your suggestion. I am, however, suggesting that it's
> not a meaningful mitigation if you don't also specify the assumptions or
> constraints you're making when you say "via CT". Considering that we're
> discussing adversarial models, the 'majority' could be 99.7% of issued
> certificates, and it would still be woefully inadequate. Similarly, an
> assumption about detectability is making a number of assumptions about not
> just the set of trusted logs, but also the policies and implementation of
> the ecosystem. You cannot suggest it's a meaningful mitigation without
> specifying the assumed properties - and the ecosystem has shown there are a
> variety of properties one COULD obtain, and even in the existing
> deployments, UAs do not all achieve equivalent properties.
>
> I realize this may get dismissed as a strawman, but I think the inclusion of
> RFC 6962 as a mitigation here is making too many unstated assumptions, and as
> a result, is the technical equivalent of stating a dependency on RFC 3514.
> While I think it is possible to make a more compelling and clearer argument
> by enumerating those properties, I found the claim that Required Domain
> somehow mitigated the second case to be unsupported by the actual proposal,
> and thus was calling it out.
>

I don't think it's necessary to spell out the exact requirements for "via
CT" in this RFC. I would be fine with a bit of text stating that the UA
should have some expectation that domain owners have the ability to detect
misissuance for certificates used as secondary. The decision of whether and
how to implement that would be left to the UA, based on its view of the
ecosystem and the risks.
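
To make that concrete (purely as an illustration, not proposed spec text),
here's a rough sketch in Go of the kind of gate a UA could apply: refuse to
use a certificate for a secondary origin unless it carries SCTs, on the
theory that a CT-logged certificate is one the domain owner could have
noticed through monitoring. Verifying the SCTs against a trusted log list is
deliberately left out; how (and whether) to do that is exactly the policy
decision I'd leave to the UA.

package sketch

import (
	"crypto/tls"
	"crypto/x509"
	"encoding/asn1"
	"errors"
)

// OID of the embedded SCT list extension (RFC 6962, section 3.3).
var sctListOID = asn1.ObjectIdentifier{1, 3, 6, 1, 4, 1, 11129, 2, 4, 2}

func hasEmbeddedSCTs(cert *x509.Certificate) bool {
	for _, ext := range cert.Extensions {
		if ext.Id.Equal(sctListOID) {
			return true
		}
	}
	return false
}

// acceptAsSecondary returns an error if the connection's leaf certificate
// carries no evidence of CT logging. Presence of SCTs is only a crude proxy
// for "the domain owner could have detected this certificate via CT
// monitoring"; a real implementation would validate them against a log list.
func acceptAsSecondary(cs tls.ConnectionState) error {
	if len(cs.PeerCertificates) == 0 {
		return errors.New("no peer certificate")
	}
	leaf := cs.PeerCertificates[0]
	if len(cs.SignedCertificateTimestamps) == 0 && !hasEmbeddedSCTs(leaf) {
		return errors.New("no SCTs: refusing to use this certificate as secondary")
	}
	return nil
}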

The breadcrumb aspect (the fact that a misissued secondary certificate has
to include the attacker's own domain) is a bonus.


> Because the trade-offs involved in revocation checking are judged
>> differently by different UAs, what we can do here is provide reasonable
>> recommendations.
>>
>> It was previously suggested that UAs may choose to require OCSP stapling
>> in order to accept a secondary certificate -- this seems like a reasonable
>> requirement but it leaves the client open to attack for the lifetime of the
>> last valid OCSP response. If the OCSP lifetime is too long for comfort, UAs
>> may also require that an out-of-band mechanism is in place and has been
>> updated within a more reasonable amount of time.
>>
>
> I don't think we can say it's a reasonable requirement, given these
> tradeoffs exist.
>
> What I think is more useful, to avoid miring the proposal in policy
> debates, is to instead focus on what properties are desired to be achieved.
> What does a 'reasonable' amount of time look like, compared to the existing
> ecosystem? Is "time" a sufficient replacement for the high visibility that
> would come with, say, a BGP hijack.
>

As mentioned earlier in this thread, DNS poisoning can happen without a
high-visibility BGP attack. An attacker in possession of a misissued
certificate is a big problem right now because the current DNS is insecure.

A different way to frame this would be in terms of the attacker's
cost/benefit of:
- misissuing a current DV certificate and using it maliciously against
any/all clients
- misissuing a Required Domain certificate from a CA that enforces stricter
validation checks, and using that certificate maliciously against clients
who have implemented secondary certificates and the recommended guidelines
we propose in this document

There is clearly some incremental risk of an attacker misissuing a
certificate with a Required Domain that contains their phishing domain, but
I question whether it's a likely vector given the current state of security
of DV certificates.
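
On Ryan's question above about what a "reasonable" amount of time looks
like: as a sketch only (the 24-hour window is an arbitrary number I picked
to illustrate, not proposed normative text), a UA that requires stapled OCSP
before secondary use could apply something like this:

package sketch

import (
	"crypto/x509"
	"errors"
	"time"

	"golang.org/x/crypto/ocsp"
)

// checkStaple gates secondary-certificate acceptance on a stapled OCSP
// response that is present, "good", and recent. maxAge is the UA's
// freshness policy; e.g. checkStaple(cs.OCSPResponse, issuerCert, 24*time.Hour),
// where the 24-hour value is purely illustrative.
func checkStaple(staple []byte, issuer *x509.Certificate, maxAge time.Duration) error {
	if len(staple) == 0 {
		return errors.New("no OCSP staple: refusing secondary use")
	}
	resp, err := ocsp.ParseResponse(staple, issuer)
	if err != nil {
		return err
	}
	if resp.Status != ocsp.Good {
		return errors.New("OCSP status is not good")
	}
	if time.Since(resp.ThisUpdate) > maxAge {
		return errors.New("OCSP response older than the UA's freshness window")
	}
	return nil
}

The same shape would work for an out-of-band mechanism; only the source of
the response and the freshness window change.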


>>> Separately, but perhaps just as importantly, this overall analysis
>>> seems to have overlooked the attack scenario of BygoneSSL (
>>> https://insecure.design/ ). Considering how many CDNs were/are
>>> vulnerable to this, I'm concerned that this solution fails to meaningfully
>>> address that threat model, which is more about "misconfiguration" than
>>> "actively malicious" (although they are admittedly hard to distinguish)
>>>
>>
>> I'm familiar with this issue but not the proposed resolution. Even if
>> certificates were revoked when a customer changes CDNs, some UAs don't
>> check for revocation on the vast majority of certificates. This scenario
>> suggests that for Required Domain certificates, clients should check
>> revocation before accepting them as secondary.
>>
>
> The authors of that research have recommended significantly reduced
> certificate lifetimes as a mitigation, rather than relying on revocation as
> the primary mitigation. This is not a new tradeoff - Dan Geer's "Risk
> Management is Where the Money is" -
> https://cseweb.ucsd.edu/~goguen/courses/275f00/geer.html - looked at a
> number of these tradeoffs, and how the costs of revocation are unequally
> distributed.
>

> However, I don't think we should be so quick to paper over the issues by
> focusing too much on revocation or reduced lifetimes as the way of
> mitigating the second-order risks of CERTIFICATEs. To do so would be to
> ignore that we're at a more fundamental debate about the primacy of the
> domain name system to deliver information versus other systems. This is a
> more fundamental design issue, and BygoneSSL is merely a symptom of the
> incongruous lifetimes (between certificates and the DNS information), which
> the CERTIFICATE frame would exacerbate.
>

This scenario seems functionally equivalent to the compromised-certificate
scenario, except that instead of a malicious attacker, the party holding the
certificate and its private key is a friendly party you already have a
business relationship with.


>
>> The performance savings are 1 connection, 2-RTTs, and a DNS resolution
>> per domain that is coalesced. The savings are front-loaded at the beginning
>> of the HTTP/2 connection, so they have a good chance of improving page
>> load times for users in real scenarios.
>>
>
> Do you have data to support this? Much like HTTP/2 PUSH, I think we can
> imagine 'idealized' models of how this SHOULD improve performance, but the
> practical reality is often far from it, and hardly realized.
>

We haven't measured the user-visible gains yet, but we hope to collaborate
with a browser vendor to quantify them. I'm optimistic that the performance
gains will be significant given the ubiquity of services like cdnjs and
jsDelivr.
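
For a rough sense of scale (hypothetical numbers, just to make the
arithmetic concrete): at a 50 ms RTT and a 30 ms DNS resolution, coalescing
one subresource domain saves about 2 x 50 + 30 = 130 ms before the first
request to that domain can be sent, and that time sits on the critical path
early in the page load. Real-world numbers will vary, which is exactly what
we'd like to measure.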


> This is core to not just thinking about the cost to implement such a
> specification (which, as mentioned previously, is significantly higher),
> but also the cost to engage in the specification effort itself, given the
> number of very difficult security tradeoffs that need to be rationalized
> here, with the natural tension between being "as secure as" today (and e.g.
> the noisiness of a BGP hijack) and "not astronomically costly to implement"
> (as it is presently).
>

I don't want to keep harping on this, but DNS poisoning does not require a
noisy attack like a BGP hijack <https://dl.acm.org/citation.cfm?id=3278516>.
Also, the proposals made here are not astronomically costly to implement;
they're modest steps in the direction the PKI is already heading. I see the
recommendations we make in this document as a forcing function toward more
secure web PKI practices.


>
>> On the server side, each coalesced domain results in one fewer live
>> connection and one fewer handshake. For servers that handle thousands of
>> simultaneous connections, no longer having to do redundant handshakes and
>> maintain redundant TLS connections can result in substantial efficiency
>> gains.
>>
>
> Isn't this largely a consequence of the increased centralization of those
> servers? That is, it seems the cost is borne by client implementations -
> which I think extends far beyond merely contemplating UAs, unless this is
> truly meant to be something "only browsers do, because if anyone else does
> it, it's not safe/useful" - in order to accommodate greater centralization
> by the servers. I ask, because given the costs, a more fruitful technical
> discussion may be exploring how to reduce or eliminate that centralization,
> as an alternative way of reducing that connection overhead.
>

What do you mean by centralization? In the case I laid out (JavaScript
subresources), if a web server can serve a subresource over an
already-established connection without checking DNS, then multiple server
providers can serve that content. This allows content hosts to diversify the
set of TLS-terminating proxies they use simultaneously, without resorting to
hacks like the multi-CDN CNAME chain configurations that are causing issues
for the specification of ESNI right now.
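
To be concrete about the mechanism (a sketch only; a real client would also
take ORIGIN frames and its own security policy into account), the
client-side decision is essentially a certificate name check against
connections it already has open, with no DNS lookup in the path:

package sketch

import "crypto/x509"

// canCoalesce reports whether any certificate already presented on an open
// connection (the primary certificate or one delivered via a CERTIFICATE
// frame) covers the subresource host, in which case the client can reuse
// that connection and skip a new DNS resolution and TLS handshake.
func canCoalesce(presented []*x509.Certificate, host string) bool {
	for _, cert := range presented {
		if cert.VerifyHostname(host) == nil {
			return true
		}
	}
	return false
}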


>
>> The privacy benefits are also quite clear. DNS queries leak the hostname
>> of the subresource to the network (or the resolver in the case the UA is
>> using DoT or DoH). Not doing a second HTTPS connection prevents the SNI
>> from being leaked to the network in the absence of ESNI.
>>
>
> I find that these three sentences actually highlight why the privacy
> benefits are not clear. ESNI has far fewer of these sharper edges, so does
> that mean it would be more fruitful and useful to focus on improving that
> adoption, rather than adopting a "try everything" approach? Similarly, the
> discussion of DoT/DoH seems to have considered a degree of trust in the
> resolver - considering that some of them are proposing to be audited
> against privacy policies. I totally understand that the best solution to
> privacy is a technology solution, where possible - hence things like ESNI -
> but I do want to highlight that the fairly significant and substantial
> costs here, which haven't really been articulated due to the hidden
> assumptions, make it difficult to see this as necessarily progress.
>
>
I disagree that ESNI has fewer sharp edges. ESNI relies on DNS, and
specifically encrypted DNS. Deploying encrypted DNS at scale comes with its
own difficult policy questions and is years away from any sort of ubiquity.

This proposal achieves a meaningful step toward the goal of eliminating
plaintext hostnames on the network for web sites. Subresources served by a
server the client has already connected to don't have their metadata leaked.
And unlike ESNI, it doesn't have to dabble in the complexities of putting
data in DNS.

If there are hidden assumptions, we should reveal them and add them as text
in the security considerations.


> I think it'd be much more fruitful to focus on what the properties are,
> rather than attempting to iterate on technical solutions with hidden
> assumptions or dependencies. I suppose we'd treat that as a "problem
> statement" in IETF land, much like we might pose it as an "Explainer"
> within groups like WHATWG, which try to set out and articulate the problems
> and need for a solution, and then iterate on the many assumptions that may
> be hidden beneath those statements.
>

>
>>> While I understand and appreciate the comparison to ORIGIN, it's worth
>>> noting that there seems to be limited industry uptake of the ORIGIN frame,
>>> in part due to the considerable security risks. Perhaps this is a signal
>>> that the calculus was wrong there as well?
>>>
>>
>> Correct me if I'm wrong, but don't both Firefox and Edge skip DNS
>> validation when connecting to domains on the SAN list of the primary
>> certificate in the presence of ORIGIN?
>>
>
> I was thinking of
> https://tools.ietf.org/html/draft-bishop-httpbis-origin-fed-up in
> particular. I don't know what Firefox or Edge are doing in this regard, but
> I haven't heard much support from other HTTP/2 implementations deploying
> it. Do you know if curl or wget have deployed this?
>

I don't see how the implementation of ORIGIN by command-line tools is
relevant. Browsers are more likely than command-line tools to benefit from
connection coalescing in ways that affect user experience; for bulk
downloads, shaving off a few RTTs matters less.
