Re: 9.2.2 Cipher fallback and FF<->Jetty interop problem


I accept that 'morally corrupt' was an overly flippant way to describe
commercial competition between implementations that may drive less than
secure choices (and again I don't think browser vendors are doing a
particularly bad job on that front - it was a hypothetical acceptance of the
premise).
But competition also exists between servers and protocols.  If using a
server that provides h2 or switching h2 on results in connection failures
then users are going to use other servers or switch off h2.  That is hardly
an incentive for h2 adoption.

You suggest that Jetty is going to be a bad web citizen by not implementing
9.2.2.  That is not the case: I would very much like to implement 9.2.2,
but I cannot see how to do so in a future-proof and/or portable way.
Jetty runs on everything from mobile phones to ancient mainframes, so I
have a wide range of platforms that I need to be concerned about.  For now
I'm probably going to go with a hard-coded regex that approximates the
restriction for today's popular ciphers, but that's just a timebomb: it
will wait until millions of deployments exist with differing
implementations of 9.2.2, and then a new cipher will detonate the bomb!
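A minimal sketch of the kind of hard-coded approximation described above (the regex, class, and method names are illustrative only, not actual Jetty code):

```java
import java.util.regex.Pattern;

public class CipherCheck {
    // Hypothetical approximation of the 9.2.2 restriction: accept only
    // ephemeral key exchange (ECDHE/DHE) with an AEAD (GCM) cipher.
    // This is exactly the timebomb: a future, perfectly secure AEAD suite
    // (e.g. a ChaCha20-Poly1305 name) would be wrongly rejected here.
    private static final Pattern ACCEPTABLE =
        Pattern.compile("^TLS_(ECDHE|DHE)_(RSA|ECDSA|DSS)_WITH_AES_\\d+_GCM_SHA\\d+$");

    static boolean acceptableForH2(String cipherSuite) {
        return ACCEPTABLE.matcher(cipherSuite).matches();
    }
}
```

A current suite like `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256` passes and a static block cipher like `TLS_RSA_WITH_AES_128_CBC_SHA` fails, but the hypothetical new cipher XYZ fails too, which is the whole problem.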

If it is really necessary to discriminate between which ciphers are
acceptable for which protocols, then ALPN needs to be enhanced so the
client can communicate its list of h2-acceptable ciphers to the server,
because even the best-faith attempt by a server to implement 9.2.2 will
eventually guess wrong and the connection will fail.  9.2.2 may be
laudable, but it is unimplementable in any robust, future-proof way.  At
the very least it needs to specify a proper fallback/retry mechanism for
when the client and server disagree on acceptable ciphers.
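To be clear about what the draft does and doesn't give us: draft-14 defines an INADEQUATE_SECURITY (0xc) error code that a server can send when its 9.2.2 check rejects the negotiated suite, but it does not oblige the client to retry over http/1.1. A sketch of such a server-side gate (the FrameWriter interface and method names are hypothetical, not from any real server API):

```java
import java.util.function.Predicate;

public class H2CipherGate {
    // Error code defined by draft-14 for exactly this case.
    static final int INADEQUATE_SECURITY = 0xc;

    // Hypothetical frame writer; any real server has its own machinery.
    interface FrameWriter {
        void writeGoAway(int lastStreamId, int errorCode);
    }

    // After ALPN selects h2, inspect the negotiated suite.  If the
    // server's best-effort 9.2.2 check rejects it, send
    // GOAWAY(INADEQUATE_SECURITY) so the client at least gets a defined
    // signal -- but nothing in the draft says the client must then fall
    // back to http/1.1, which is the missing retry step argued for above.
    static boolean gate(String negotiatedSuite,
                        Predicate<String> acceptable,
                        FrameWriter out) {
        if (!acceptable.test(negotiatedSuite)) {
            out.writeGoAway(0, INADEQUATE_SECURITY);
            return false;
        }
        return true;
    }
}
```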

I have still not heard how a new cipher like my hypothetical XYZ could be
deployed into a web where the current 9.2.2 is widely implemented.


On 18 September 2014 18:33, Cory Benfield <> wrote:

> Greg, I've rearranged your post below because I found it flowed weirdly.
> On 18 September 2014 06:21, Greg Wilkins <> wrote:
> > I don't actually agree with the premise that browser vendors are that
> > morally corrupt.  I think they have a difficult line to walk between
> > offering reasonable security and widespread connectivity.  If they are
> > offering old ciphers then I am sure that there are significant origin
> > resources that can only be accessed with them.
> Alright, this makes sense. We agree on the original premise, but let
> me state it outright: all user-agents are strongly incentivized to
> make as many connections as they possibly can. Being unable to provide
> a resource to a user almost always costs you that user. Users rarely
> pick a browser (e.g. Chrome), try to connect to a resource and find
> that it fails where another browser (say IE) succeeded, and then say
> "Thank you, Chrome, for protecting me from myself!" They say "Chrome
> sucks and doesn't work". This is the exact same reason Microsoft have
> long had a team whose sole purpose is to specially code bugs *back in*
> to new Windows versions to prevent programs breaking.
> Most users will never thank a user-agent for making them more secure,
> but they will always blame a user-agent for refusing to fetch
> information for them. If there is a competitor user-agent that makes
> them less secure but gives them access to a resource they want,
> they'll just switch to that. Given that most browsers are not
> charities, it cannot be a surprise that they aim to increase their
> market share. Everyone is very nice in this forum because civility is
> good and the world is made better by us all getting a bite at the
> apple, but don't for a moment think that we're not all in some form of
> competition.
> > So he is saying that because browser vendors value market share more than
> > their users' security it is apparently our job to withhold the h2 protocol
> > or just fail to connect in an effort to push them towards being good web
> > citizens.
> No, this responsibility goes both ways. I, a user-agent provider, am
> just as responsible as you to ensure good web citizenship. If hyper
> connects to Jetty, does the ALPN handshake, and then finds a block
> cipher has been negotiated, I am just as responsible as you for
> tearing that connection down. The correct statement here is that "good
> web citizens" are responsible for holding "bad web citizens" to
> account.
> I am confident that browsers will abide by the requirement in the
> draft spec to tear down h2 connections to servers that don't negotiate
> secure ciphers per 9.2.2. Everyone on this list will be working
> together on this point (I hope). So the responsibility is not just for
> you.
> > But let's accept the premise that browser vendors are indeed morally
> > corrupt and will deploy insecure ciphers rather than lose market share.
> Why would anyone accept this premise? Especially with the loaded
> language. I will accept a related premise: "browser vendors are
> running businesses, and have an inclination to serve their own
> financial best interests as well as the interests of their users". I
> don't assume that *anyone* on this list is morally corrupt: I have
> seen no evidence for this fact.
> > Let's also assume that origin server deployers are also prepared to accept
> > those bad ciphers and make their content available over them...  I have
> > absolutely no idea how allowing those two to tango over http/1 rather than
> > h2 pushes them to any better security practices.
> It doesn't. None of the design goals I have ever seen for h2 include
> making http/1 more secure. Why would they? And besides, what could the
> spec possibly say? "If you negotiate h2 with insecure ciphers, tear
> down the connection and refuse to ever allow connections from that
> peer again"? If the connection fails, *clearly* a http/1 connection
> can be made, so any ruling in the h2 spec does nothing to prevent what
> you just discussed.
> > Perhaps the failing abysmally to connect part might be a bit more
> > persuasive, but I expect that is more likely to make them remove the good
> > ciphers.
> Yes it would. Again, server vendors want as many clients to be able to
> connect to them as possible.
> > If failure to connect is a driver to better cipher policy, then we need
> > not hobble h2 to achieve that.  Instead the browser vendors and server
> > deployers can simply grow a pair and remove the bad ciphers.
> Failing to connect is not a driver to good security policy, it's a
> driver to *bad* security policy. Good security policy refuses
> connections, bad security policy accepts them. And there will always
> be pressure to make connections where others cannot. This is why
> browsers let you browse to websites with expired certs, it's why
> libraries let you turn off certificate verification, and it's why
> servers let you say you'll accept the TLS NULL cipher: because if they
> don't, others will.
> I don't think anyone in this list is a bad internet citizen because
> we're all here trying to make it better! We've all got a vested
> interest in the web being the best it can be. The problem is that us
> taking the moral high ground leads to users picking up projects that
> don't take the moral high ground. Absolutism here doesn't help anyone.
> We have to work with what we've got. We have to be as secure as we can
> be without driving users to implementations that don't care about
> security.
> Greg, I'm genuinely sympathetic to your original complaint. I've had
> problems with cipher suites as well, and have accepted that the best I
> can do is fail if the server screwed up. I don't like that approach.
> But I think the goal of section 9.2.2 is laudable and I'd be loath to
> remove it without replacing it with something equally important.
> Cory

Greg Wilkins <>
HTTP, SPDY, Websocket server and client that scales
advice and support for jetty and cometd

Received on Thursday, 18 September 2014 09:44:37 UTC