
Re: Required Domain proposal for Additional Certificates

From: Ryan Sleevi <ryan-ietf@sleevi.com>
Date: Sat, 30 Mar 2019 17:50:32 -0400
Message-ID: <CAErg=HHrWC4cytou-3z4y=1DUUkD5CHcg4+7ZA-yAtPVr+Qwkw@mail.gmail.com>
To: Nick Sullivan <nick@cloudflare.com>
Cc: Ryan Sleevi <ryan-ietf@sleevi.com>, HTTP Working Group <ietf-http-wg@w3.org>
On Sat, Mar 30, 2019 at 9:10 AM Nick Sullivan <nick@cloudflare.com> wrote:

> Maybe I wasn't clear. I'm not suggesting that UAs should
> detect misissuance on the client. I am suggesting that operators can
> detect misissuance via CT monitoring. The majority of publicly-trusted
> certificates are already included in a set of CT 'trust anchors' trusted by
> multiple UAs. Many server operators leverage CT monitoring to detect
> misissuance on their domains to great effect.
>

No, you were clear in your suggestion. I am, however, suggesting that it's
not a meaningful mitigation if you don't also specify the assumptions or
constraints you're making when you say "via CT". Considering that we're
discussing adversarial models, the 'majority' could be 99.7% of issued
certificates, and it would still be woefully inadequate. Similarly, a
claim about detectability rests on a number of assumptions about not
just the set of trusted logs, but also the policies and implementation of
the ecosystem. You cannot suggest it's a meaningful mitigation without
specifying the assumed properties - and the ecosystem has shown there are a
variety of properties one COULD obtain, and even in the existing
deployments, UAs do not all achieve equivalent properties.

I realize this may get dismissed as a strawman, but I think the inclusion
of RFC 6962 as a mitigation here is making too many unstated assumptions,
and as a result, is the technical equivalent of stating a dependency on
RFC 3514. While I think it is possible to make a more compelling and
clarified argument by enumerating those properties, I felt the claim that
Required Domain somehow mitigates the second case was unsupported by the
actual proposal, and thus was calling it out.

Because the trade-offs involved in revocation checking are judged
> differently by different UAs, what we can do here is provide reasonable
> recommendations.
>
> It was previously suggested that UAs may choose to require OCSP stapling
> in order to accept a secondary certificate -- this seems like a reasonable
> requirement but it leaves the client open to attack for the lifetime of the
> last valid OCSP response. If the OCSP lifetime is too long for comfort, UAs
> may also require that an out-of-band mechanism is in place and has been
> updated within a more reasonable amount of time.
>

I don't think we can say it's a reasonable requirement, given these
tradeoffs exist. What I think is more useful, to avoid miring the proposal
in policy debates, is to instead focus on what properties are desired to be
achieved. What does a 'reasonable' amount of time look like, compared to
the existing ecosystem? Is "time" a sufficient replacement for the high
visibility that would come with, say, a BGP hijack?


> Separately, but perhaps just as importantly, this overall analysis
>> seems to have overlooked the attack scenario of BygoneSSL (
>> https://insecure.design/ ). Considering how many CDNs were/are
>> vulnerable to this, I'm concerned that this solution fails to meaningfully
>> address that threat model, which is more about "misconfiguration" than
>> "actively malicious" (although they are admittedly hard to distinguish)
>>
>
> I'm familiar with this issue but not the proposed resolution. Even if
> certificates were revoked when a customer changes CDNs, some UAs don't
> check for revocation on the vast majority of certificates. This scenario
> suggests that for Required Domain certificates, clients should check
> revocation before accepting them as secondary.
>

The authors of that research have recommended significantly reduced
certificate lifetimes as a mitigation, rather than relying on revocation as
the primary mitigation. This is not a new tradeoff - Dan Geer's "Risk
Management is Where the Money is" -
https://cseweb.ucsd.edu/~goguen/courses/275f00/geer.html - looked at a
number of these tradeoffs, and how the costs of revocation are unequally
distributed.

However, I don't think we should be so quick to paper over the issues by
focusing too much on revocation or reduced lifetimes as the way of
mitigating the second-order risks of CERTIFICATE frames. To do so would be
to ignore that we're having a more fundamental debate about the primacy of
the domain name system to deliver information versus other systems. This
is a deeper design issue, and BygoneSSL is merely a symptom of the
incongruous lifetimes (between certificates and the DNS information), which
the CERTIFICATE frame would exacerbate.
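
That lifetime incongruity is easy to quantify: a certificate obtained
while a domain was hosted on a shared service stays valid after the
domain moves away, because nothing ties certificate validity to the
DNS/hosting relationship. A sketch, with purely illustrative dates:

```python
from datetime import datetime

# Sketch of the BygoneSSL exposure window: the old host can still
# present a valid certificate for a domain long after that domain has
# left the service. Dates below are illustrative assumptions.

def exposure_window(cert_not_after, left_service_at):
    """Days the old host can still present a valid cert after the domain left."""
    return max(0, (cert_not_after - left_service_at).days)

cert_expires = datetime(2021, 1, 1)   # e.g. a multi-year certificate
moved_away   = datetime(2019, 4, 1)   # customer changes CDNs
print(exposure_window(cert_expires, moved_away))  # roughly 21 months of exposure
```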


> The performance savings are 1 connection, 2-RTTs, and a DNS resolution per
> domain that is coalesced. The savings are front-loaded at the beginning of
> the HTTP/2 connection, so they have a good chance at improving page loads
> times for users in real scenarios.
>

Do you have data to support this? Much like HTTP/2 PUSH, we can imagine
'idealized' models of how this SHOULD improve performance, but the
practical reality often falls far short. This matters not just for
thinking about the cost to implement such a specification (which, as
mentioned previously, is significantly higher), but also the cost of
engaging in the specification effort itself, given the number of very
difficult security tradeoffs that need to be rationalized here, with the
natural tension between being "as secure as" today (e.g. preserving the
noisiness of a BGP hijack) and "not astronomically costly to implement"
(as it is presently).
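
The claimed savings reduce to a back-of-envelope model like the
following. The latency figures are illustrative assumptions - which is
precisely the problem with idealized models: they are no substitute for
measured data:

```python
# Back-of-envelope model of the claimed per-domain savings: one DNS
# resolution plus the ~2 RTTs of TCP + TLS setup, avoided for each
# coalesced domain. All latency figures are illustrative assumptions.

def idealized_saving_ms(rtt_ms, dns_ms, coalesced_domains):
    """Setup latency avoided if each coalesced domain skips DNS + 2 RTTs."""
    return coalesced_domains * (dns_ms + 2 * rtt_ms)

print(idealized_saving_ms(rtt_ms=50, dns_ms=30, coalesced_domains=3))  # 390
```

Whether any of that is realized depends on connection reuse patterns,
cache state, and parallelism - exactly the factors idealized models omit.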


> On the server side, each coalesced domain results in one fewer live
> connection and one fewer handshake. For servers that handle thousands of
> simultaneous connections, no longer having to do redundant handshakes and
> maintain redundant TLS connections can result in substantial efficiency
> gains.
>

Isn't this largely a consequence of the increased centralization of those
servers? That is, it seems the cost is borne by client implementations -
which I think extends far beyond merely contemplating UAs, unless this is
truly meant to be something "only browsers do, because if anyone else does
it, it's not safe/useful" - in order to accommodate greater centralization
by the servers. I ask, because given the costs, a more fruitful technical
discussion may be exploring how to reduce or eliminate that centralization,
as an alternative way of reducing that connection overhead.


> The privacy benefits are also quite clear. DNS queries leak the hostname
> of the subresource to the network (or the resolver in the case the UA is
> using DoT or DoH). Not doing a second HTTPS connection prevents the SNI
> from being leaked to the network in the absence of ESNI.
>

I find these three sentences actually highlight why the privacy benefits
are not clear. ESNI has far fewer of these sharp edges, so would it be
more fruitful and useful to focus on improving its adoption, rather than
adopting a "try everything" approach? Similarly, the discussion of
DoT/DoH seems to assume a degree of trust in the resolver, considering
that some resolvers are proposing to be audited against privacy
policies. I totally understand that the best solution to privacy is a
technology solution, where possible - hence things like ESNI - but I do
want to highlight that the fairly significant and substantial costs
here, which haven't really been articulated due to the hidden
assumptions, make it difficult to see this as necessarily progress.

I think it'd be much more fruitful to focus on what the properties are,
rather than attempting to iterate on technical solutions with hidden
assumptions or dependencies. I suppose we'd treat that as a "problem
statement" in IETF land, much like we might pose it as an "Explainer"
within groups like WHATWG, which try to set out and articulate the problems
and need for a solution, and then iterate on the many assumptions that may
be hidden beneath those statements.


> While I understand and appreciate the comparison to ORIGIN, it's worth
>> noting that there seems to be limited industry uptake of the ORIGIN frame,
>> in part due to the considerable security risks. Perhaps this is a signal
>> that the calculus was wrong there as well?
>>
>
> Correct me if I'm wrong, but don't both Firefox and Edge skip DNS
> validation when connecting to domains on the SAN list of the primary
> certificate in the presence of ORIGIN?
>

I was thinking of
https://tools.ietf.org/html/draft-bishop-httpbis-origin-fed-up in
particular. I don't know what Firefox or Edge are doing in this regard, but
I haven't heard much support from other HTTP/2 implementations deploying
it. Do you know if curl or wget have deployed this?
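
For reference, the SAN-list check described for ORIGIN-style coalescing
amounts to something like this sketch. It is an illustration only - real
clients reportedly also weigh DNS results, ORIGIN-frame contents, and
revocation state, and I can only speak to the behavior as reported:

```python
# Sketch of a SAN-based coalescing check: only coalesce a request for
# `domain` onto an existing connection if the primary certificate's SAN
# list covers it (exact match or one-label wildcard). Illustrative only.

def san_covers(san_entry, domain):
    """Exact match, or one-label wildcard match (e.g. *.example.com)."""
    if san_entry == domain:
        return True
    if san_entry.startswith("*.") and "." in domain:
        return domain.split(".", 1)[1] == san_entry[2:]
    return False

def may_coalesce(san_list, domain):
    return any(san_covers(s, domain) for s in san_list)

sans = ["example.com", "*.example.com"]
print(may_coalesce(sans, "img.example.com"))       # True
print(may_coalesce(sans, "deep.img.example.com"))  # False: wildcard is one label only
```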

Received on Saturday, 30 March 2019 21:51:08 UTC
