- From: Cory Benfield <cory@lukasa.co.uk>
- Date: Thu, 18 Sep 2014 09:33:15 +0100
- To: Greg Wilkins <gregw@intalio.com>
- Cc: Stuart Douglas <stuart.w.douglas@gmail.com>, Martin Thomson <martin.thomson@gmail.com>, Brian Smith <brian@briansmith.org>, Ilari Liusvaara <ilari.liusvaara@elisanet.fi>, HTTP Working Group <ietf-http-wg@w3.org>
Greg, I've rearranged your post below because I found it flowed weirdly.

On 18 September 2014 06:21, Greg Wilkins <gregw@intalio.com> wrote:

> I don't actually agree with the premise that browser vendors are that
> morally corrupt. I think they have a difficult line to walk between
> offering reasonable security and wide spread connectivity. If they are
> offering old ciphers then I am sure that there are significant origin
> resources that can only be accessed with them.

Alright, this makes sense. We agree on the original premise, but let me state it outright: all user-agents are strongly incentivized to make as many connections as they possibly can. Being unable to provide a resource to a user almost always costs you that user.

Users rarely pick a browser (e.g. Chrome), try to connect to a resource, find that it fails where another browser (say IE) succeeded, and then say "Thank you, Chrome, for protecting me from myself!" They say "Chrome sucks and doesn't work". This is the exact same reason Microsoft have long had a team whose sole purpose is to code bugs *back in* to new Windows versions to prevent programs breaking.

Most users will never thank a user-agent for making them more secure, but they will always blame a user-agent for refusing to fetch information for them. If there is a competitor user-agent that makes them less secure but gives them access to a resource they want, they'll simply switch to it. Given that most browsers are not charities, it cannot be a surprise that they aim to increase their market share. Everyone is very nice in this forum because civility is good and the world is made better by us all getting a bite at the apple, but don't for a moment think that we're not all in some form of competition.

> So he is saying that because browser vendor value market share more than
> their users security it is apparently our job to withhold the h2 protocol or
> just fail to connect in an effort to push them towards being good web
> citizens.

No, this responsibility goes both ways. I, a user-agent provider, am just as responsible as you for ensuring good web citizenship. If hyper connects to Jetty, does the ALPN handshake, and then finds that a block cipher has been negotiated, I am just as responsible as you for tearing that connection down.

The correct statement here is that "good web citizens" are responsible for holding "bad web citizens" to account. I am confident that browsers will abide by the requirement in the draft spec to tear down h2 connections to servers that don't negotiate secure ciphers, per 9.2.2. Everyone on this list will be working together on this point (I hope). So the responsibility does not rest on you alone.
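To make that concrete: the client-side check is not onerous. Here is a very rough sketch of the shape it might take in a Python client, using the standard ssl module rather than hyper's actual internals. The fragment blacklist is illustrative and incomplete (OpenSSL cipher names don't map cleanly onto the spec's black list), and a real client would have sent the connection preface before emitting any frames:

    import socket
    import ssl
    import struct

    # Illustrative fragment only: 9.2.2 references a full black list of
    # TLS 1.2 cipher suites, which is far longer and more precise than
    # this handful of name patterns.
    BAD_CIPHER_FRAGMENTS = ('RC4', 'CBC', '3DES', 'NULL')

    INADEQUATE_SECURITY = 0xC  # h2 error code for unacceptable transport

    def send_goaway(tls, error_code, last_stream_id=0):
        # GOAWAY: 9-byte frame header (3-byte length, type 0x7, no
        # flags, stream 0), then last-stream-id and the error code.
        payload = struct.pack('>II', last_stream_id, error_code)
        header = (struct.pack('>I', len(payload))[1:] + b'\x07\x00' +
                  struct.pack('>I', 0))
        tls.sendall(header + payload)

    def connect_checking_ciphers(host):
        ctx = ssl.create_default_context()
        ctx.set_alpn_protocols(['h2', 'http/1.1'])
        raw = socket.create_connection((host, 443))
        tls = ctx.wrap_socket(raw, server_hostname=host)
        if tls.selected_alpn_protocol() == 'h2':
            name, _version, _bits = tls.cipher()
            if any(frag in name for frag in BAD_CIPHER_FRAGMENTS):
                # The client is just as responsible as the server here:
                # refuse to speak h2 over a suite 9.2.2 rules out.
                send_goaway(tls, INADEQUATE_SECURITY)
                tls.close()
                raise ssl.SSLError('h2 negotiated a banned cipher: ' + name)
        return tls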
> But let's accept the premise that browser vendors are indeed morally corrupt
> and will deploy insecure ciphers rather than lose market share.

Why would anyone accept this premise, especially with such loaded language? I will accept a related premise: "browser vendors are running businesses, and are inclined to serve their own financial best interests as well as the interests of their users". I don't assume that *anyone* on this list is morally corrupt: I have seen no evidence of it.

> Let's also
> assume that origin server deployers are also prepared to accept those bad
> ciphers and make their content available over them... I have absolutely no
> idea how allowing those two to tango over http/1 rather than h2 pushes them
> to any better security practises.

It doesn't. None of the design goals I have ever seen for h2 includes making http/1 more secure. Why would they? And besides, what could the spec possibly say? "If you negotiate h2 with insecure ciphers, tear down the connection and refuse to ever allow connections from that peer again"? If the connection fails, *clearly* an http/1 connection can be made, so no ruling in the h2 spec can prevent the scenario you just described.

> Perhaps the failing abysmally to connect
> part might be a bit more persuasive, but I expect that is more likely to
> make them remove the good ciphers.

Yes, it would. Again, server vendors want as many clients as possible to be able to connect to them.

> If failure to connect is a driver to better cipher policy, then we need not
> hobble h2 to achieve that. Instead the browser vendors and server deployers
> can simply grow a pair and remove the bad ciphers.

Failing to connect is not a driver of good security policy; it's a driver of *bad* security policy. Good security policy refuses connections; bad security policy accepts them. And there will always be pressure to make connections where others cannot. This is why browsers let you browse to websites with expired certs, it's why libraries let you turn off certificate verification, and it's why servers let you say you'll accept the TLS NULL cipher: because if they don't, others will.
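For what it's worth, "remove the bad ciphers" is a one-line policy decision on the server side, not an engineering problem. A sketch of what it might look like with Python's ssl module and an OpenSSL-style cipher string (the string, suite choices, and file names are my illustration, not a recommendation from the spec):

    import ssl

    # Server-side context: offer only ephemeral-key AEAD suites and
    # explicitly strip NULL, anonymous, export, RC4, and 3DES suites.
    # The string is illustrative; a real deployment would tune it to
    # its client base. 'server.crt'/'server.key' are placeholder names.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
    ctx.set_ciphers('ECDHE+AESGCM:DHE+AESGCM:!aNULL:!eNULL:!EXPORT:!RC4:!3DES')
    ctx.load_cert_chain(certfile='server.crt', keyfile='server.key')
    ctx.set_alpn_protocols(['h2', 'http/1.1'])

The hard part is not writing those lines; it's accepting the failed connections that follow.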
I don't think anyone on this list is a bad internet citizen, because we're all here trying to make the web better! We've all got a vested interest in the web being the best it can be. The problem is that our taking the moral high ground leads to users picking up projects that don't take the moral high ground. Absolutism here doesn't help anyone. We have to work with what we've got: we have to be as secure as we can be without driving users to implementations that don't care about security.

Greg, I'm genuinely sympathetic to your original complaint. I've had problems with cipher suites as well, and have accepted that the best I can do is fail if the server screwed up. I don't like that approach. But I think the goal of section 9.2.2 is laudable, and I'd be loath to remove it without replacing it with something equally effective.

Cory

Received on Thursday, 18 September 2014 08:33:42 UTC