Re: TLS certificate verification policy?

> Mikael Nordfeldth <>
> 08 November 2014 18:12
> Hi all, I'm the current maintainer of GNU social (formerly StatusNet).
> I figured I'll try to install Diaspora to work out some kinks that are
> making it hard for Diaspora and GNU social to federate, despite very
> similar protocols in use.
> During my installation I found that Diaspora by default requires CA
> validation on HTTPS connections. This requires everyone running Diaspora
> to purchase (or trust StartSSL not to start charging) a TLS certificate
> - and I guess we all know what a fishy and awful business that is. Sites
> are not able to use self-signed certificates or even CAs like
CACert hasn't even passed the audits required for a CA to be included in 
any of the major browsers' trust stores. While I believe the CACert 
people are honest, why should anybody trust an organization that is 
unable to pass an audit verifying the secure management of its trusted 
root certificates?

The CA model is a problematic one, though CACert isn't the answer. 
Techniques like certificate pinning can mitigate the downsides somewhat.
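As a minimal sketch, pinning can be as simple as comparing the peer certificate's fingerprint against a value obtained out of band. The host name, port, and pin handling below are illustrative, not a prescription:

```python
import hashlib
import socket
import ssl

def fetch_server_cert(host, port=443):
    """Fetch the peer certificate in DER form. Chain verification is
    disabled here only because the pin comparison below replaces it."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

def cert_matches_pin(der_cert, pinned_sha256_hex):
    """True if the certificate's SHA-256 fingerprint equals the stored pin."""
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    return fingerprint == pinned_sha256_hex.lower().replace(":", "")
```

A trust-on-first-use deployment would record the fingerprint on the first connection and refuse to proceed on any later mismatch.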
> Relatedly, the XMPP community has recently decided to use a baseline of
> required TLS encryption but _not_ required CA verification. (sidenote:
> this leaves out the already doomed Google Talk from wide XMPP federation
> since Google won't enable server-to-server TLS).
> Diaspora has a reason not to immediately change their default
> configuration, since they _hotlink_ a lot of data such as remote users'
> avatars etc. This would cause many problems for today's web browsers
> since they are following their own CA root certificate databases, giving
> out errors for anything "unverified". (GNU social caches everything
> locally and publishes from the user's already trusted server)
I don't think it's reasonable to cache every social object locally - 
what about shared videos, which can be quite sizable? It seems 
reasonable to conclude that hotlinking is a necessity in this case.
> Either way, this got me thinking on whether TLS enforcement of any kind
> is within the scope of this working group when working out a protocol
> and deciding on security models.
> Unfortunately, WebFinger (RFC7033) was standardised with enforced HTTPS
> + CA verification (without referencing a list of trusted CAs, thus
> ensuring total chaos in which trust chains to use). That's something to
> be considered if WebFinger becomes part of a Social Web protocol.
WebFinger does not mandate CA verification. It mandates certificate 
verification. This does not necessarily require CAs as the trust roots.
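For illustration, here is how a client could build the RFC 7033 query URL and verify the peer against a trust anchor of its own choosing (for example, the remote server's self-signed certificate) rather than the system CA bundle. The PEM path is hypothetical:

```python
import ssl
from urllib.parse import quote

def webfinger_url(host, resource):
    """Build an RFC 7033 WebFinger URL; the spec mandates HTTPS."""
    return "https://%s/.well-known/webfinger?resource=%s" % (
        host, quote(resource, safe=""))

def context_with_custom_root(pem_path):
    """A client context that verifies the peer certificate against a
    chosen trust anchor instead of the system CA set. Certificate and
    hostname verification remain enabled."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # verification on by default
    ctx.load_verify_locations(cafile=pem_path)
    return ctx
```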
> Also I have no idea how (or whether at all) the linked data web folks -
> which might be relevant if we're using some LD interface) have any idea
> how to address HTTP vs. HTTPS, given there's no good migration policy.
The ActivityPump draft I have submitted requires everything be served 
over HTTPS. This is a policy I'd encourage - non-secure HTTP should be 
deprecated, especially for potentially private data.
> If the discussion on TLS/HTTPS is within the scope of the working group,
> I suggest we set it as a requirement - but leave out CA verification,
> just like the XMPP community has done and for the same reasons.
I think it is important for us to require HTTPS and validation. We need 
not specify the mechanism of validation.

For the time being, that is likely to be CA based. What we should 
encourage is the adoption of DANE (which relies on DNSSEC) or a similar 
technology, which specifies the server's certificate in DNS and enables 
validation of its authenticity there.

DANE doesn't solve all problems (it will still be time for a trip to a 
CA if you want an EV certificate, for example), but it does solve the 
big one.

Unfortunately, I don't know of DANE being on any browser's near-term 
road map.
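The core of a DANE check is small: fetch the TLSA record for the service (e.g. _443._tcp.<host>, DNSSEC-validated) and compare its certificate association data against the presented certificate. A sketch of just the comparison step, per RFC 6698's matching types; selector and certificate-usage handling are omitted:

```python
import hashlib

def tlsa_match(der_cert, assoc_data_hex, matching_type=1):
    """Check certificate association data from a TLSA record (RFC 6698).

    matching_type 0 = exact certificate data, 1 = SHA-256, 2 = SHA-512.
    The DNSSEC-validated lookup of the TLSA record itself is assumed to
    happen elsewhere."""
    if matching_type == 1:
        digest = hashlib.sha256(der_cert).hexdigest()
    elif matching_type == 2:
        digest = hashlib.sha512(der_cert).hexdigest()
    else:
        digest = der_cert.hex()
    return digest == assoc_data_hex.lower()
```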

     - Owen


Received on Saturday, 8 November 2014 18:35:31 UTC