- From: Martin Thomson <martin.thomson@gmail.com>
- Date: Fri, 5 Sep 2014 22:03:35 -0700
- To: Greg Wilkins <gregw@intalio.com>
- Cc: Patrick McManus <mcmanus@ducksong.com>, HTTP Working Group <ietf-http-wg@w3.org>
On 5 September 2014 21:34, Greg Wilkins <gregw@intalio.com> wrote:
> My concern is that implementations which comply with the relevant RFCs can
> be put together and not work. There should be no need of fiddling
> functionality outside of the scope of the RFCs to make them work.

The extent of the fiddling is either zero (this just works in NSS and
OpenSSL, to my knowledge) or - in things like the JDK - setting a
preference order that favours valid suites over invalid ones. It's not
hard. If you have ALPN, then you probably already have what you need to
interoperate in just about every TLS implementation I've seen. You just
need to know how to influence suite selection.

> Client and server already negotiate the most preferable common cipher.
> How does adding protocol to that already accepted selection mechanism
> influence the deprecation of poor ciphers in any way?

Preferable is subjective. We've seen that demonstrated many times where
servers pick RC4 over better ciphers because... well, I can only speculate.

> If you want to stop old servers using poor ciphers, then the clients
> should stop offering those ciphers. Do you want to break the web.

We are taking a hard line, but it tends to be unpopular:
http://it-beta.slashdot.org/story/14/09/05/2120246/mozilla-1024-bit-cert-deprecation-leaves-107000-sites-untrusted

If clients don't send the "bad" ciphers, then large swathes of the web
stop working. No browser wants to do that, because that's a trigger for a
mass exodus of users. So we end up stuck with ciphers that are
sort-of-bad-but-not-broken-enough-to-pull. Which sucks.
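[Editorial note, not part of the original message: a minimal sketch of the kind of JDK preference-order tweak Thomson alludes to. The suite names and the use of `SSLParameters`/`setUseCipherSuitesOrder` here are one illustrative way to do it on Java 8, not a prescription from the mail.]

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLParameters;

public class SuiteOrder {
    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        SSLEngine engine = ctx.createSSLEngine();

        SSLParameters params = engine.getSSLParameters();
        // List an HTTP/2-acceptable AEAD suite ahead of a legacy CBC suite,
        // so "preference" favours the suites HTTP/2 allows while the legacy
        // suite remains available for older clients.
        params.setCipherSuites(new String[] {
            "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", // acceptable to HTTP/2
            "TLS_RSA_WITH_AES_128_CBC_SHA"           // legacy fallback
        });
        // Java 8+: make the server honour its own ordering rather than the client's.
        params.setUseCipherSuitesOrder(true);
        engine.setSSLParameters(params);

        System.out.println(String.join(", ", engine.getEnabledCipherSuites()));
    }
}
```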
Received on Saturday, 6 September 2014 05:04:03 UTC