What HTTP/2.0 should mandate instead

Instead of mandating a particular security approach, HTTP should
mandate client support for mechanisms that protect against a downgrade
attack on first contact. There are two distinct parts to this problem:

1) How does a site announce that it supports encryption?
2) How does the client access that information?

The answers to 1 and 2 turn out to be different.

The DNS is the Internet's infrastructure for publishing authoritative
assertions about DNS names. It is the mechanism that sites SHOULD use
to publish their security policy (whether TLS is always offered or
required, acceptable cipher strengths, certificate constraints, etc.).
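By way of illustration, the DANE work in the IETF (still a draft at
the time of writing) encodes exactly this kind of assertion in the
DNS. A hypothetical TLSA record for a site's HTTPS service might look
like the following; the owner name and digest are made up for the
example:

    ; Policy assertion for the TLS service on port 443 of example.com.
    ; Usage 3 (end-entity cert), selector 0 (full certificate),
    ; matching type 1 (SHA-256 digest of the expected certificate).
    _443._tcp.example.com. IN TLSA 3 0 1 (
        d2abde240d7cd3ee6b4b28c54df034b97983a1d16e8a410e4561cb106618e971 )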

Unfortunately, moving from a position where no sites publish security
policy to one where they all do is pretty hard. In particular, most
sites will not publish policy at all, many will publish policy that is
wrong, wrong, wrong, and others will publish a policy and then never
update it. The bottom line is that a client really can't expect to
rely on raw, uncurated security policy information. DKIM policy works
fine because intermediaries use it only as an indicator of a possible
problem, in combination with a lot of other information.

Another big screw factor is that about 2% of the Internet does not
provide access to real DNS. The DNS servers at many locations will
only forward what is necessary to answer A record queries.

The bottom line is that you can't expect security policy to work in
practice if you design for the Internet architecture that you think
people should have deployed rather than the one they did.


Another issue is the fact that 'authoritative' does not mean safe.
Anyone can get a DNS name and/or an SSL cert. EV certificates provide
a measure of accountability, but the whole point of accountability is
to deter default through a credible threat of consequences. The
developers of Flame and Stuxnet obviously expected to be
consequence-free.

This fact gives you another reason to want to curate the security
policy information rather than attempt direct enforcement at the
client.


This naturally leads to an architecture where the client connects to
some sort of service chosen by the user (or their employer in a
closed network), and that service then curates the trust assertions
made by CAs, Web server hosting providers and anyone else. Kaspersky
and Comodo are already offering products that start along this path as
part of their AV solutions. Expect the other AV vendors to follow at
their own pace.

This architecture solves both the problem of security policy
statements being 'dirty' and the DNS service limitation. The client
and service can employ whatever is necessary to get around the local
network restrictions and achieve the best possible service.


But in addition, this architecture would allow clients to up the
proportion of HTTP connections using SSL from the current 2% or so to
at least 20% and possibly as high as 60% or more.

The reason for this is that the curators of the security policy do not
need to rely on authoritative statements from server operators. Most
CAs and AV companies already crawl the Web constantly. The trust
curator can look at their crawler logs, see that TLS is available at
google.com, and tell clients to use the https version of the site.
Similarly, visitors to fidelity.com can be told to go straight to the
secure site rather than through the insecure redirect.
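To make this concrete, a curated policy assertion pushed to the client
might look something like the JSON below. This is purely illustrative
and not the wire format of the omnibroker draft; the field names are
invented for the example:

    {
      "Domain": "fidelity.com",
      "Service": "http",
      "Advice": {
        "Transport": "TLS",
        "Port": 443,
        "Authenticated": true,
        "MinCipherStrength": 128
      }
    }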

This means that security can now become the default for the sites that
offer it. Oddly enough, it can also become the default for sites that
do not have an SSL certificate AT ALL.

The reason for this is a limitation in the original design of SSL: the
server has to present its certificate before the client has named the
site it wants, so each SSL server needs its own IPv4 address. This has
since been addressed in TLS with the SNI extension, but being able to
depend on SNI support is far off in the future.

As a consequence of this restriction, hosting providers routinely put
many sites behind a single IP address that answers TLS with whatever
certificate happens to be installed there. The result is that it is
possible to complete a TLS connection to about 60% of Web sites that
do not have a TLS cert of their own at all! This particular
peculiarity led to a recent nonsense scare paper claiming that 60% of
all SSL certificates were misconfigured; a little knowledge can be a
dangerous thing.

Making use of those certs would simply mean telling the client to use
TLS to secure the connection, ignore the certificate parameters
presented, and not present the padlock icon indicator (since the
endpoint is not authenticated). That approach provides confidentiality
and integrity protection that is secure against anything short of an
active MITM attack.
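A minimal sketch of that client behavior, in C# since that is what the
reference code uses. The host name is hypothetical, and the validation
callback deliberately accepts any certificate, which is exactly why no
padlock may be shown for such a connection:

    using System;
    using System.Net.Security;
    using System.Net.Sockets;

    class OpportunisticTls
    {
        static void Main()
        {
            // Hypothetical site: no valid cert of its own, but the
            // shared host answers TLS, so an encrypted channel is
            // still available.
            string host = "example.com";

            using (var tcp = new TcpClient(host, 443))
            // Accept whatever certificate is presented; the goal is
            // confidentiality and integrity only, not authentication,
            // so the UI must not display the padlock.
            using (var tls = new SslStream(tcp.GetStream(), false,
                       (sender, cert, chain, errors) => true))
            {
                tls.AuthenticateAsClient(host);
                Console.WriteLine("Encrypted, unauthenticated: {0}, {1} bits",
                    tls.CipherAlgorithm, tls.CipherStrength);
            }
        }
    }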


I have a draft; the spec is written in JSON and is in two parts: a
connection protocol to perform the key management (which does mandate
support for TLS) and a query protocol that allows security policy
(and other) queries to be supported. I have reference code in C# (MIT
license), and the generator can be retargeted to C, Java or anything
else you might want.

http://tools.ietf.org/html/draft-hallambaker-omnibroker-01


-- 
Website: http://hallambaker.com/
