Re: HTTP/2 and Pervasive Monitoring

In message <>, Mark Nottingham writes:
>On 15 Aug 2014, at 7:16 pm, Poul-Henning Kamp <> wrote:
>> Straw-man:
>> ----------
>> 	http:// can use TLS with *arbitrarily weak* crypto algorithms,
>> 	and no authentication, and it is treated *exactly* like
>> 	HTTP/1.1 plaintext by browsers.
>> 	https:// uses authenticated TLS with strong crypto, as today,
>> 	and indicates this with the well-known changes in browser
>> 	behaviour.
>It sounds like you're proposing that we allow weaker ciphersuites for 
>the Opp-Sec draft. 

It's not really a proposal relative to any document other than BCP 188,
in the context of HTTP.

>If Opp-Sec traffic is able to be distinguished (e.g., by using a 
>different ciphersuite), it'll be possible for an active attacker to 
>selectively MITM it and not be detected. 

I'm afraid that you just proved one of my points with respect to
how hard a sell this might be, because people don't understand
herd immunity.  :-)

Let me try to explain it another way:

Today the majority of PM takes the form of a passive optical splitter,
tcpdump and post-analysis.  Given the "take" it brings, this is dirt
cheap to implement.

Currently, they can run a filter which is essentially:

	tcpdump -i all0 -A | egrep -i "terrorist|bomb"

and the cost is way less than they spend on toilet-paper.

By whitening the present HTTP plaintext traffic with TLS, even
with quite weak cipher-suites, we dramatically increase the cost
of the post-analysis step, instantly making that filter impossible.
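To make "impossible" concrete, here is a toy sketch (pure Python; RC4 is chosen only because it is simple and notoriously weak, and the key and request line are made up -- this is not a claim about actual TLS record encryption): even a broken cipher turns the free substring match into per-flow cryptanalysis.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4 keystream XOR.  Weak by design; used here only to
    show that any whitening at all defeats naive content matching."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed with the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Hypothetical plaintext HTTP request and session key, for illustration.
plaintext = b"GET /forum?q=pipe+bomb HTTP/1.1\r\nHost: example.org\r\n"
ciphertext = rc4(b"weak-session-key", plaintext)

assert b"bomb" in plaintext       # the grep filter matches for free
assert b"bomb" not in ciphertext  # after whitening it sees only noise
```

The attacker can still recover the plaintext -- RC4 is broken -- but only by spending cryptanalytic effort on each flow, which is exactly the cost explosion the argument relies on.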

Depending on the cryptographic cost we impose with our whitening,
a number of avenues remain open for the attackers:

1. Brute-force only a tiny fraction of traffic, most likely guided
   by metadata analysis.

   Chances are that the tiny fraction they'll be most interested
   in uses "real" https in the first place.

2. MITM a tiny fraction of traffic, most likely guided by metadata
   analysis.

   What they already have to do for "real" https.
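For avenue 1, a back-of-envelope sketch shows why brute force cannot scale to the whole take, even at the weak end of the cipher spectrum.  Every number below is an illustrative assumption, not a measurement:

```python
# Back-of-envelope: cost of brute-forcing *all* whitened traffic.
keyspace        = 2 ** 40   # assume export-grade 40-bit keys (the weak end)
keys_per_second = 1e9       # assumed trial-decryption rate per CPU core
flows_per_day   = 1e9       # assumed number of HTTP flows on one big link

# On average you find the key halfway through the keyspace.
seconds_per_flow = (keyspace / 2) / keys_per_second

# Cracking every flow on the link, expressed in core-years per day.
core_years_per_day = flows_per_day * seconds_per_flow / (365 * 24 * 3600)

print(f"{seconds_per_flow:.0f} s of CPU per flow")
print(f"{core_years_per_day:.0f} core-years per day of traffic")
```

Under these made-up but not absurd numbers, each flow costs minutes of CPU and the full take costs tens of thousands of core-years per day -- versus a grep that was essentially free.  Hence brute force only ever touches a tiny, metadata-selected fraction.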

The crucial result here is that we eliminate the "pervasive" from
PM with respect to HTTP data, by putting north of 99.99% of all
current plaintext traffic out of their economic reach, traffic which
they currently get to see for absolutely free.

Your concern about them being able to tell "real" TLS from "phony"
TLS, for the purpose of MITM, at best comes in as a distant second-
or third-order effect relative to this proposal: it's the step from
tcpdump to MITM where their cost explodes.

It doesn't really matter that they can instantly tell what is "phony"
TLS and what is "real" TLS; the point is that they cannot get at
either of them without working actively and hard for it, and therefore
they will be forced to focus on and target only suspect traffic.

"working hard" will in most cases be MITM, which is a victory in
itself, because MITM has non-zero probability of detection, where
passive collection has zero probability.
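The detection asymmetry compounds: each active interception carries some small per-connection chance of being noticed (certificate warnings, key pinning, out-of-band comparison), while a passive tap carries none.  A numeric sketch, where p is an assumed, purely illustrative value:

```python
# Probability that a MITM campaign is *never* detected, assuming an
# (illustrative, made-up) independent per-connection detection chance p.
p = 1e-6

for n in (10**6, 10**7, 10**8):
    undetected = (1 - p) ** n
    print(f"{n:>9} MITM'd connections: P(never caught) = {undetected:.3g}")
```

Even with a one-in-a-million chance per connection, the odds of staying hidden collapse toward zero as the campaign approaches "pervasive" scale, which is the whole point.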

So don't they just switch from pervasive monitoring to pervasive MITM?

In theory they could; in practice they will not.

Technologically, it is extremely doubtful whether anybody is ever
going to be able to do anything like moderately well-hidden MITM TLS
on a full 40Gb/s fiber.  You'd have to do it blatantly, as in "The
Great Firewall of Elbonia" style brutalism.

Even though 10Gb/s might be technologically feasible, the economics
ensure that it will never happen on a scale anywhere close to the
current pervasive passive slurp, because the cost of access to
fibres, the cost of equipment and the legal complications all crop up.

To sum up:  It doesn't matter that they can instantly see it is
"phony" TLS; they still have to work much harder to get at it.

Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.

Received on Friday, 15 August 2014 12:35:14 UTC