RE: report on EV and SSL MITM proxying

On Galileo.

 

Galileo thought about things and reasoned. He was also tortured, by the state
of his day, for articulating ideas. That state lives in shame to this day.
Like states founded on slavery, history will never forget.

 

Put any marketing spin on it you like, seeing if a cert exists in a remote
file is not going to qualify as Galilean-grade science. First, it's not
novel; second, it's not original; third, it was all done 15+ years ago. All
we are doing is swapping form - one container for another.

 

Now, I think you are on to something when you want to see the web as a giant
computer, one whose atoms can all calculate in RDF'ised Boolean algebra, and
jointly compute to find satisfiable expressions of ever greater complexity
and ever wider applicability. And, in that sense, one sees why one wants
trust and security - so that contributions to some subweb take its joint
computation forward, avoiding contamination by the malicious side of human
nature. The human world of the human-centric web is messy and somewhat
malicious (since it models its creators). The semantic web part is
supposedly going to be more machine-like, and thus nice and friendly.
(Hmm... sci-fi movies about evil robots notwithstanding.)

But this is rather ideal, and somewhat abstract. At the same time, it's a
nice goal - one that is not typical "security" - and one that provides us
with a wider mission, embodying the semantic web aspects of this project.
This takes us beyond fiddling with old SSL, and even older certs, and gets
us back to the spirit of the WebID itself.

 

Of course, it would be nice if it actually worked, too, for the countless
millions of browsers behind a corporate or ISP firewall that insists on
speaking for them, intermediating always. What I hope we learned this week
is that for at least 50% of the web population, today, "the MITM firewall"
has immense power to spoof the best that server certs can offer, and will
normally prevent the SSL client authn message reaching its target site with
connection integrity - which means WebID doesn't really work in practice as
we defined it in the FOAF+SSL era. It only works in the lab, in conditions
too rarefied to relate to the real world of SSL, full of nastiness.
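
To make that concrete: a minimal sketch, in Python, of how a client can
observe the spoofing for itself - compare the certificate the network
actually presents against a fingerprint pinned over a path you trust. The
host name and pin value below are placeholders, not anything real:

    import hashlib
    import ssl

    # Fingerprint recorded out-of-band, on a network you control.
    # Placeholder value - substitute the real one for your site.
    PINNED_SHA256 = "0" * 64

    def presented_fingerprint(host, port=443):
        # Fetch whatever certificate the network path hands us.
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest()

    if presented_fingerprint("example.org") != PINNED_SHA256:
        print("cert differs from pin; an intercepting proxy may be re-signing")

Behind an MITM firewall the two fingerprints will differ, because the proxy
re-signs with its own CA - which is exactly the power described above.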

 

Which isn't to say that we cannot change the praxis of https/SSL, where
certs (the proxy path) and/or SSL (the multiplexed path) are GREAT ideas to
follow up. There is a lot of latent power, yet unexploited, there. But, I
predict, this endeavor is going to involve as much cryptopolitical skill as
technical design skill. It's just the nature of crypto that, like science,
it is inherently political - something that really hasn't changed much
since Galileo's day.

 

From: Henry Story [mailto:henry.story@bblfish.net] 
Sent: Tuesday, March 08, 2011 10:26 AM
To: peter williams
Cc: public-xg-webid@w3.org
Subject: Re: report on EV and SSL MITM proxying

 

 

On 8 Mar 2011, at 18:58, peter williams wrote:





After 15 years of trying and 10 major vendors, isn't it a bit surprising
that none of them have reduced it to a trivial easy UI? It's so easy!

 

Indeed, it is surprising, and it is easy to improve :-)





 Hardly a logical argument, peter. Far too empirical!

 

So I suppose if Galileo found that the Earth turns around the Sun, you would
have argued that this was impossible because for the previous million years
nobody had thought of it.

 

Science moves in weird ways. You can't argue, from the fact that something
has not been done, that it can't be done. And you can't argue that because
your opponents are rich they must be right. After all, Galileo had the Pope
against him.

 

Think rather about what is NEW in what we are doing, and think about why
there was not much momentum for client side certificates. I explained that
already a few times. Client side certs that can only be used on one web site
are NOT very useful. They only become useful when usable globally. And they
can only do that with URIs. So the problems with client side certs were:

  - using distinguished names

  - lack of linked data

  

So if we solve these problems, the rest is easy. There will then be an
incentive to improve the User Interface.
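
To illustrate, a rough sketch only - it assumes rdflib with SPARQL support
and the W3C cert ontology at http://www.w3.org/ns/auth/cert#, and leaves
out the X.509 parsing that extracts the URI and key from the certificate.
The global check is simply to dereference the URI and see whether the
published profile lists the same RSA key:

    import rdflib

    CERT = rdflib.Namespace("http://www.w3.org/ns/auth/cert#")

    def webid_claims_key(webid_uri, modulus_hex, exponent):
        """True if the profile at webid_uri publishes the given RSA key."""
        g = rdflib.Graph()
        g.parse(webid_uri)  # dereference the WebID, fetching the profile
        q = """SELECT ?mod ?exp WHERE {
                   ?id cert:key ?key .
                   ?key cert:modulus ?mod ;
                        cert:exponent ?exp . }"""
        for mod, exp in g.query(q, initNs={"cert": CERT},
                                initBindings={"id": rdflib.URIRef(webid_uri)}):
            if (int(str(mod), 16) == int(modulus_hex, 16)
                    and int(exp) == exponent):
                return True
        return False

No CA and no distinguished names: the URI does the global naming, the
linked data does the key distribution.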





 

Try to go through Ryan's argument; he actually answered you logically. In
defending EV, he distinguished it from other server certs that *some*
browsers will not clearly characterize as having been MITM'ed. 

 

It may be useful to have browsers show that they are using proxy
certificates. If this is your UI improvement proposal, it sounds OK, but
it's not a big deal. You can see why not a lot of time was invested in this.

 

Having thus acknowledged the world of SSL MITMing as a fundamental threat,
he then suggested: perhaps look for user id protocols OTHER than TLS client
authn. 

 

My argument is that it is not a fundamental threat. You make it sound like
TLS does not work. But in fact it works very well. If you are in control of
your OS and machine you will not be able to work through these proxies -
i.e., you will be blocked from revealing information to a third party, which
is what you want.

 

Whatever system you come up with will have this problem: if you need to use
someone else's network, and they want to see what you are doing and
communicating, they will be able to argue that they should not pass on the
information unless you give them the keys. Be that using TLS, XML Encryption
or anything else.

 





He outlines how a world involving any cascade of SSL MITMing proxies
(needed for social/corporate "interests") interferes with end-to-end client
authn, precluding its use beyond interaction with the first proxy in the
sequence. He didn't say it, but this is not a real constraint if that proxy
is now a WebSSO IDP that translates the layer 4 assertion into a layer 7
signed token that bypasses transport bridging.
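
The translation itself is mechanically trivial. A toy sketch in Python
(HMAC-based; the header name, shared key, and token format here are
invented for illustration - a real IDP would mint a SAML or similar signed
token): once the proxy has verified the TLS client cert, it vouches for the
WebID at layer 7:

    import base64, hashlib, hmac, json, time

    IDP_KEY = b"shared-secret-between-proxy-and-origin"  # placeholder

    def mint_assertion(webid):
        # Proxy side: called after TLS client authn has verified the cert.
        body = json.dumps({"webid": webid, "exp": int(time.time()) + 300})
        b64 = base64.urlsafe_b64encode(body.encode()).decode()
        sig = hmac.new(IDP_KEY, b64.encode(), hashlib.sha256).hexdigest()
        return b64 + "." + sig  # e.g. sent in an X-WebID-Assertion header

    def verify_assertion(token):
        # Origin side: checks the proxy's signature, then the expiry.
        b64, sig = token.rsplit(".", 1)
        good = hmac.new(IDP_KEY, b64.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(good, sig):
            return None
        body = json.loads(base64.urlsafe_b64decode(b64))
        return body["webid"] if body["exp"] > time.time() else None

The signed token survives any number of transport-level hops, which is the
point.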

 

Social/corporate interests are not necessarily evil. They are larger agents,
and if they need to see what is going on on their network, they will only be
able to do so if you agree to give them access to your machine. And this
will be so whatever technology you use.

 

So in fact the legal uses of TLS work correctly, and the illegal uses of TLS
work correctly. There is a gray zone with corrupt CAs, but that will be
dealt with by IETF DANE and DNSSEC.
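
(For concreteness, the DANE idea is to publish the certificate, or its hash,
in DNS and let the client compare. A rough sketch, assuming a recent
dnspython with TLSA support; note that real DANE additionally requires the
DNS answer to be DNSSEC-validated, which this sketch skips:)

    import hashlib
    import ssl
    import dns.resolver

    def dane_matches(host, port=443):
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        for rr in dns.resolver.resolve("_%d._tcp.%s" % (port, host), "TLSA"):
            # selector 0 = full certificate, matching type 1 = SHA-256
            if rr.selector == 0 and rr.mtype == 1:
                if rr.cert == hashlib.sha256(der).digest():
                    return True
        return False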

 

 

Not really my words, and really not my argument. I just elicited the words
and argument from folks that others here *should* find reputable.






 

From: Henry Story [mailto:henry.story@bblfish.net] 
Sent: Tuesday, March 08, 2011 12:03 AM
To: peter williams
Cc: public-xg-webid@w3.org
Subject: Re: report on EV and SSL MITM proxying

 

Some folks suggested that life would be all rosy, in webland, if the browser
displayed which client cert had been presented to a given website (per tab,
presumably). How come those over-complicating security-designer types just
don't do simple and obvious things, when it's ALL so EASY if one just thinks
minimally and logically!

 

I have not seen an argument in what you have put forward that shows that
this is not an easy thing to do. The only argument from browser vendors I
have heard is that client certs are not widely used, and so it has not been
a priority for them.

 

 

 

Social Web Architect
http://bblfish.net/

 

Received on Wednesday, 9 March 2011 08:05:25 UTC