Re: SSL/TLS everywhere fail

> On 7 Dec 2015, at 10:53, Poul-Henning Kamp <phk@phk.freebsd.dk> wrote:
> 
> --------
> In message <51A9584D-0F29-484A-AAC5-75C46D35658F@lukasa.co.uk>, Cory Benfield writes:
> 
>> I ask these questions only because you used the word 'simple'.
>> The header itself (as in, the bytes on the wire) may be simple, but
>> the technological underpinnings of this approach are *not* simple, at
>> least as far as I can see. The best we have right now is a current I-D
>> that aims to address exactly this,
>> draft-thomson-http-content-signature[0], and that draft suffers from the
>> absurd flaw that the signing public key is transmitted in
>> unauthenticated cleartext right alongside the signature itself.
> 
> I am not sure I understand why you consider that an "absurd flaw"
> and I have not been able to find any mail-discussion where such
> a critique is raised.
> 
> Can you summarize the argument ?

To try to keep this discussion vaguely followable, I’m going to compress your mails into one reply, Poul-Henning. Hope that’s not a problem. =)

The reason I characterise this as an "absurd flaw" is that it makes it trivial to mount an attack that replaces the content. For example, suppose I run an API server that signs its content. We can do key distribution in one of three ways:

1. Ahead-of-time and out-of-band. For example, I could place the key on my API documentation website. This works, but I need to secure that website (TLS again!) to avoid manipulation of the key data.
2. TOFU (trust on first use): when you first log in to my API, I provide you, in-band, with the key I will subsequently use to sign all content.
3. I provide the signing key in-band alongside every message, which is what the current draft does.

Both (2) and (3) make it possible for an attacker sitting on the path to rewrite those messages and substitute new keys. That allows the attacker to fool my user-agent into believing that the message is intact as it left the origin, when in fact it has only been unaltered *since the attacker altered it*. In any communication, if the attacker possesses a key that I will accept as valid for that communication, the data is essentially unprotected. This is, of course, how TLS MITM works.
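To make that failure concrete, here’s a rough sketch of the problem in Python. The names, and the choice of Ed25519 via the ‘cryptography’ package, are mine purely for illustration; this doesn’t model the draft’s actual header format, just the trust model:

    # Rough sketch only: Ed25519 via the 'cryptography' package is an
    # illustrative stand-in, not what the draft specifies.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    def sign(body, key):
        # What the origin does: ship the body, the public key, and a signature.
        return body, key.public_key(), key.sign(body)

    def verify(body, public_key, signature):
        # A verifier that trusts whatever key arrives in-band with the message.
        try:
            public_key.verify(signature, body)
            return True
        except InvalidSignature:
            return False

    # The origin signs the genuine response.
    origin_key = ed25519.Ed25519PrivateKey.generate()
    message = sign(b'{"balance": 100}', origin_key)

    # An on-path attacker rewrites the body and replaces *both* the key
    # and the signature with its own.
    attacker_key = ed25519.Ed25519PrivateKey.generate()
    message = sign(b'{"balance": 0}', attacker_key)

    # The user-agent cannot tell the difference: the signature verifies
    # perfectly against the key it was handed alongside the message.
    assert verify(*message)

The signature check succeeds on the tampered body, because nothing binds the key to the origin.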

In HTTPS we’ve started using HPKP to address this problem, which amounts to option (2). Option (2) is definitely better than option (3), don’t get me wrong, but it’s still trivial for a pervasive attacker to simply replace the keys distributed by an authority on first use. This is why Google distributes an HPKP preload list: to make it harder to attack the TOFU step. We also require that HPKP headers only be honoured over TLS’d connections, which further raises the cost of pervasively subverting the mechanism. It’s not perfect, but it’s a damn sight better than what the current draft requires.
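(For anyone who hasn’t seen it, the pinning I’m describing is driven by a header roughly like the one below, per RFC 7469; the pin values here are invented. The user-agent caches the pins for max-age seconds and rejects later connections whose certificate chain doesn’t include a pinned key.)

    Public-Key-Pins: pin-sha256="d6qzRu9zOECb90Uez27xWltNsj0e1Md7GkYYkVoZWmM=";
        pin-sha256="E9CZ9INDbd+2eRQozYqqbQ2yXLVKB9+xcprMF+44U1g=";
        max-age=5184000; includeSubDomains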

As a result, I consider it extremely important that whatever we specify either be TOFU with a normative requirement to cache that key indefinitely, or use some out-of-band key distribution mechanism. Otherwise, all an attacker needs to do is sit on the connection and replace both the body *and* the key: a relatively trivial thing to do.

> As in you get a bogus body and there is no signature ?

No, as in you get a bogus body and a valid signature for that bogus body, just signed by a different key than the one that was originally used to sign the body.

> I think I'd lock that down with DNSSEC/DANE providing the information that all HTTP under this domain must be signed with a particular cert.

This is an answer to my question: what I wanted was a key distribution mechanism that provides integrity and associates keys with specific domains, and DNSSEC/DANE is a perfectly valid one. I don’t have a deep understanding of DANE myself, but it certainly strikes me as appropriate, though I’d want to consult with others before I was convinced of that. If draft-thomson suggested DANE as a mechanism for key distribution, or indeed required support for DANE while allowing others, I would become much warmer towards the draft, and I’d certainly be open to seeing how implementing it goes.

For that matter, using DANE for the encryption draft as well seems sensible.
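For the curious: the binding PHK describes is roughly what a TLSA record (RFC 6698) already gives us for TLS server certificates. Nothing like this is specified for content-signing keys today, but a DNSSEC-signed record of this shape is the sort of thing I have in mind (the name and digest below are invented):

    _443._tcp.api.example.com. IN TLSA 3 1 1 <SHA-256 digest of the signing key's SPKI>

The "3 1 1" parameters mean: match the end-entity key itself rather than a CA, select the SubjectPublicKeyInfo, and compare via a SHA-256 digest.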

> As for the CA thing:  My distrust is with the content of the default root-cert lists shipped, not with the protocol mechanisms.

I admire your positivity towards X.509, which I hate with a burning passion. That said, I take the point, and I believe that using DANE would address that concern for both of us.

To sum up: I think this draft is a good idea, but I think we should get ahead of this now and either specify at least one mandatory key distribution mechanism, or specify the properties such a mechanism must have. DNSSEC/DANE has been proposed, and I think it makes a good model for a useful system.

Now someone, please tell me why I’m wrong.

Cory