RE: 9.2.2 Cipher fallback and FF<->Jetty interop problem

Martin,

I have to disagree with some of what you are raising here.  We are essentially requiring functionality of the TLS stack interfaces to enable filtering based on a higher-layer protocol.  This is not a wire protocol specification, but a requirement for implementers to add application-protocol knowledge into TLS.  It is not the same as RFC 2818 or the upcoming UTA BCP, where we specify how to use TLS as documented and implemented; here we have to change how the TLS stack filters cipher suites.

Does this mean that HTTP implementers are only allowed to use TLS platforms that provide direct HTTP/2 filtering?  That forces HTTP developers and the ecosystem to wait for platforms to catch up, if they ever do.  If nothing else, we should consider moving this section to a separate document or a BCP, to avoid an indirect requirement on TLS implementations.

-Rob

-----Original Message-----
From: Andrei Popov [mailto:Andrei.Popov@microsoft.com] 
Sent: Wednesday, September 24, 2014 5:49 PM
To: Martin Thomson
Cc: ietf-http-wg@w3.org
Subject: RE: 9.2.2 Cipher fallback and FF<->Jetty interop problem

Sorry Martin, I am not inclined to argue about "authority", "entitlements", "demands", or "institutional beliefs":).

I'm merely pointing out that the perception of the security of certain TLS features changes over time. If we can address TLS protocol issues at the TLS layer without modifying application protocols, that's a good thing. The HTTP/2 spec requiring specific TLS cipher suites goes against this, because a compatible HTTP/2 client and server need to agree on the "acceptable" TLS features, or else they will see INADEQUATE_SECURITY errors.

Cheers,

Andrei

-----Original Message-----
From: Martin Thomson [mailto:martin.thomson@gmail.com]
Sent: Tuesday, September 23, 2014 11:03 PM
To: Andrei Popov
Cc: ietf-http-wg@w3.org
Subject: Re: 9.2.2 Cipher fallback and FF<->Jetty interop problem

Sorry about the long reply.  Too many words have been spent on this topic already, but I think that Andrei's comments go to the core of what is an important issue.

On 23 September 2014 11:00, Andrei Popov <Andrei.Popov@microsoft.com> wrote:
>> The draft permits the TLS stack to complete the negotiation.  That's why we defined an INADEQUATE_SECURITY error code: for cases where the stack proceeds.
>
> How does this help? Basically, if the TLS layer does not filter ALPN IDs, or the HTTP layer does not filter TLS protocol versions/cipher suites/TLS extensions, then connections will sometimes fail with INADEQUATE_SECURITY errors? I don't think a general-purpose Web server can realistically go down this path.

This helps because it allows people who don't have the ability to alter the TLS negotiation process to determine whether things have succeeded.  It's primarily useful for clients, though, where the client determines that TLS has completed but the server chose RC4 rather than something that is actually good.

A server probably shouldn't be doing this, unless it is implemented on top of something like Java 7, where the set of suites that the client offered can cause the stack to pick a bad suite.
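To make that concrete, here's a rough sketch in Java (since Java 7 came up) of a client that lets the stack finish and only then checks the result.  The isBadSuite() policy, the host name, and the "non-GCM means non-AEAD" shortcut are all mine, purely illustrative, not anything from the draft:

    import javax.net.ssl.SSLException;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class SuiteCheck {
        // Illustrative policy only: treat RC4, and anything that isn't
        // a GCM suite, as bad.  A real check would consult the draft's
        // requirements; "_GCM_" is a crude approximation of "AEAD".
        static boolean isBadSuite(String suite) {
            return suite.contains("_RC4_") || !suite.contains("_GCM_");
        }

        public static void main(String[] args) throws Exception {
            SSLSocketFactory f =
                (SSLSocketFactory) SSLSocketFactory.getDefault();
            SSLSocket s = (SSLSocket) f.createSocket("example.com", 443);
            s.startHandshake();  // the TLS stack negotiates on its own
            String suite = s.getSession().getCipherSuite();
            if (isBadSuite(suite)) {
                // An HTTP/2 client would send GOAWAY with the
                // INADEQUATE_SECURITY error code here, then close.
                s.close();
                throw new SSLException("inadequate security: " + suite);
            }
        }
    }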

The idea is to limit what HTTP/2 (and HTTP/1.1 in combination) requires of the TLS stack.  At this point, the following control points are required:

1. set what cipher suites are enabled
2. on the client, set the order in which cipher suites are offered
3. determine what cipher suite has been negotiated
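For what it's worth, all three control points exist in stock JSSE.  A minimal sketch (the suite names and host are just examples I picked; note that GCM suites only appeared in the JDK's TLS stack in Java 8, if I recall correctly, which is part of the Java 7 problem above):

    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class ControlPoints {
        public static void main(String[] args) throws Exception {
            SSLSocketFactory f =
                (SSLSocketFactory) SSLSocketFactory.getDefault();
            SSLSocket s = (SSLSocket) f.createSocket("example.com", 443);

            // (1) restrict the enabled suites; (2) on the client, the
            // order of this array is the order in which they are offered.
            s.setEnabledCipherSuites(new String[] {
                "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",  // HTTP/2-friendly
                "TLS_RSA_WITH_AES_128_CBC_SHA"            // HTTP/1.1 fallback
            });

            s.startHandshake();

            // (3) determine what was actually negotiated.
            System.out.println(s.getSession().getCipherSuite());
        }
    }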

If we didn't provide INADEQUATE_SECURITY, then we would require a greater level of control, perhaps the level that Greg thinks we need.

> I am in full agreement: it is hard to get the world to disable weak ciphers and old TLS versions. Right now addressing a new TLS vulnerability involves:
> 1. Updating TLS RFCs;
> 2. Patching TLS stacks;
> 3. Getting patched TLS stacks deployed.
>
> Requiring certain TLS versions and cipher suites in HTTP RFCs means that now we also need to:
> 4. Update HTTP RFCs when TLS vulnerabilities are discovered;
> 5. Patch HTTP stacks for TLS vulnerabilities;
> 6. Get patched HTTP stacks deployed.
>
> So it seems that instead of keeping the problem encapsulated at the TLS layer and figuring out how to better solve it there, we're spreading it to the application layer. I think this would be counter-productive.

To reuse ekr's taxonomy, there are two levels of problem we worry about here:

1. Events where a suite turns out to have a disastrous flaw.  This is a chemical spill situation and we will have to use disaster recovery measures, which might be drastic.  I don't see a particular burden in having to update application usages as well as the core of TLS for a disaster like that.

2. The harder problem is the slow rot of ciphers that are simply in the process of becoming obsolete.  This is what has motivated this set of requirements.  If HTTP/2 is never negotiated with RC4, that wouldn't be a bad thing.


I do want to call you out on the basic argument you are relying on here, which I think is flawed.  You seem to assume that we are not entitled to demand certain features of the TLS stack; that the TLS stack is the one place where this sort of knowledge is enshrined.  I apologize if I seem to have misrepresented your argument as an appeal to authority; that's not what I intend.

My interpretation of things here is that an application is able to demand a certain minimum level of protection.  More correctly, I believe that it is the responsibility of an application to make that demand.  I choose to interpret the existence of efforts like UTA as an indication that this is the institutional belief of the IETF, but will let others reach their own conclusions on that.

On the specifics of this matter, I don't think that there is any question that we are entitled to require that packets be encrypted, rather than using a suite like, say, TLS_RSA_WITH_NULL_SHA, which doesn't encrypt at all.  And as a practical matter, the difference between that and something as weak as TLS_RSA_WITH_DES_CBC_SHA is pretty much academic.  Maybe that distinction matters more to you, but I contend that practicality rules here.

Similarly, I believe that it is well within our rights to request a suite that can offer forward secrecy.

If you accept that premise, then I think the only question is where the line is drawn.  The AEAD distinction is merely a convenient delineation point, being new enough to ensure that only relatively modern ciphers are accepted and old ones are not.  It also intentionally aligns with the choices made for TLS 1.3.
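In code terms, the line being drawn is roughly the following (my own sketch; matching on registry names like this is a crude stand-in for checking the registry entries themselves, and the method name is made up):

    class Http2SuitePolicy {
        // Accept only suites that are AEAD (GCM/CCM) and that offer
        // forward secrecy via an ephemeral key exchange (ECDHE/DHE).
        static boolean acceptableForHttp2(String suite) {
            boolean ephemeral = suite.startsWith("TLS_ECDHE_")
                             || suite.startsWith("TLS_DHE_");
            boolean aead = suite.contains("_GCM_")
                        || suite.contains("_CCM");
            return ephemeral && aead;
        }
    }

    // e.g. acceptableForHttp2("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256") -> true
    //      acceptableForHttp2("TLS_RSA_WITH_RC4_128_SHA")              -> false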

An alternative is to exhaustively enumerate the entries from the registry that we don't want, which I am also OK with.  I'm not sure that this changes the set of acceptable suites.  And if we moved the line to permit a block cipher, then the question of whether to permit only the encrypt-then-MAC (EtM) variant comes up.

I understand why you might reject this premise, I really do.  And this is a point that we may just end up disagreeing on, even if we can agree about the facts, which have at times been victim to rhetoric in this debate.

I just think that the choices we've made improve security at a holistic level, and that's what I care about most.  If we do nothing, then we've lost an opportunity, and that would be disappointing.