
Re: CSP: Minimum cipher strength

From: Hill, Brad <bhill@paypal.com>
Date: Mon, 15 Sep 2014 16:47:06 +0000
To: Austin William Wright <aaa@bzfx.net>
CC: Jim Manico <jim.manico@owasp.org>, Tom Ritter <tom@ritter.vg>, "public-webappsec@w3.org" <public-webappsec@w3.org>
Message-ID: <DDD7DBC4-7E02-46E7-BDD6-637B1438CFBA@paypal.com>
In the end, things like this come down to who has the incentives and who has the leverage to make change happen.

The CA/Browser Forum is a great anti-pattern here - a non-public, unaccountable organization that protects its own business interests first.  They were notified privately of the vulnerabilities relating to issuance for private names in 2007, and I began raising the issue in the forum itself in 2011.  A 5-7 year mitigation timeline isn't one anyone should be proud of.  The CABF essentially only moves faster when one of the browsers forces it to.

As far as third-party resources go, I think the incentives are already in the right place.  If you're a website including third-party resources, you are better off (and the web is better off) using your leverage to get them to upgrade, rather than setting a policy just for your users.  (Or you can use SRI to identify the exact resource you want, regardless of transport security.)

And the user-agents are in the best position to act in users' interest by ratcheting up the minimum acceptable cipher strengths for the entire web - and web sites will follow their lead, as the consequence of not doing so is effectively going dark.  This also matches the technical architecture of TLS cipher suite negotiation well.
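(To make the ratchet concrete, here's a hedged sketch of what a client-side floor looks like using Python's ssl module - the TLS 1.2 floor and the cipher string are illustrative choices, not a recommendation from this thread:)

```python
import ssl

# Sketch of a client-side TLS floor (Python 3.7+ ssl module).
# A client configured this way simply cannot complete a handshake with a
# server below the floor, so servers that want to stay reachable must upgrade.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Cipher preferences narrow the negotiation further; the string uses
# OpenSSL syntax, and the particular suites here are illustrative.
context.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")

print(context.minimum_version.name)
```

Because TLS negotiates the strongest parameters both ends accept, raising the floor on one side is enough to shift the whole negotiation - the same lever a user agent pulls for the entire web.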

Given the complexity of the overall ecosystem, and the much bigger set of users and use cases for TLS than just the web, I tend to think that adding yet another knob for another third-party case is probably not worth the effort.  Especially since any cipher breaks that allow compromising the integrity of content will be quickly blacklisted by browsers anyway, and BEAST- or CRIME-style information disclosure against a third party should be of substantially less concern to the first party, given the same-origin policy.

-Brad

On Sep 15, 2014, at 4:49 AM, Austin William Wright <aaa@bzfx.net> wrote:

> The CA/Browser Forum <https://cabforum.org/> is one of the major organizations driving policy decisions around TLS certificates for TLS clients and Certificate Authorities.
> 
> Perhaps the most visible effect is that member Certificate Authorities will no longer issue certificates for IP addresses or private intranet names after November 1, 2015.
> 
> The top five Web browser vendors are also members.
> 
> TLS itself, of course, does not and cannot make policy decisions like a minimum required cipher, though known-insecure parameters (like data compression) are being removed in TLS 1.3.
> 
> TLS is, appropriately, designed so policy decisions can evolve rapidly around the technology without necessitating updates to the standard itself.
> 
> The concern of manipulating TLS connections is not a new one, nor one specific to HTTP, and there's much work being done elsewhere that would benefit more applications than just Web services. Problems like a rogue CA can be addressed with DANE (RFC 6698) or the proposed Public-Key-Pins HTTP header (still in draft). Additionally, an attacker can only downgrade a connection to parameters that both ends are willing to use, so simply disable the insecure ciphers on your server to set a "minimum cipher strength". In particular, SSL 3.0 should be disabled.
> 
> Is the intent to not load _third party_ resources that fail to meet the requirements? I would be concerned that it offers a false sense of security: third parties can still become untrusted, become compromised, etc., even without a MITM attack. Simply don't load executable code from third parties, and use CSP to enforce that.
> 
> Austin.
> 
> On Sun, Sep 14, 2014 at 2:39 PM, Jim Manico <jim.manico@owasp.org> wrote:
> > Ultimately, I agree with Mike - the solution to solve this (generally) is for UAs to start deprecating things
> 
> Is there some kind of "rolling standard" that could be set so that all
> UAs are on the same page? This seems fairly arbitrary as it stands
> today...
> 
> --
> Jim Manico
> @Manicode
> (808) 652-3805
> 
> On Sep 13, 2014, at 8:37 PM, Tom Ritter <tom@ritter.vg> wrote:
> 
> >> Ultimately, I agree with Mike - the solution to solve this (generally) is
> >> for UAs to start deprecating things
> 
> 
> 
Received on Monday, 15 September 2014 16:47:36 UTC

This archive was generated by hypermail 2.3.1 : Monday, 23 October 2017 14:54:06 UTC