
RE: The Science of Insecurity

From: Peter Williams <home_pw@msn.com>
Date: Thu, 29 Dec 2011 08:04:14 -0800
Message-ID: <SNT143-W22401D271A4292D3D6951492AD0@phx.gbl>
To: "public-xg-webid@w3.org" <public-xg-webid@w3.org>

Traditionally, crypto insecurity has come about through two means: lack of precision, and human nature. Ninety-nine percent of the insecurity is due to the latter. As for the former, the particular nature of an insecurity problem has often been exploited so that errors become an oracle. If you cannot break a cipher, break its key generation. If you cannot dupe the user into using a broken key generator, break the password guarding the resulting key. And so on. This is normal strategic police work (and national security work): you engineer false assurance into the physical implementation, while indoctrinating folks to think at a logical level. Thinking logically is the flaw, and a lot of effort goes into indoctrinating a particular logical doctrine about crypto engineering (for civil systems). This is where crypto gets interesting, as it combines with counter-intelligence: the planting of insecurity in security. It is the semantic Trojan horse.

Now, CCITT's X.509 does not define signature and public-key encodings, if you look (very) carefully. They are macros, to be specified by "vendors". There have been numerous attempts to formalize suitable encodings, padding rules, and serializations, many of which have been shown to have fundamental crypto-level vulnerabilities, due to the nature of the particular cipher math. RSA is not a panacea (in contrast to how it is presented in the academic textbook for undergraduates). Its very math induces vulnerabilities when placed in certain contexts, including those where a protocol can turn RSA into an oracle that reveals facts about a cipher instance, a key, a signature, or a key-transport block. Much as against Purple in 1937-1942, one works diligently to induce the release of entropy one bit at a time, while guessing. Certain laws of guessing apply to crypto, involving the well-known non-deterministic but probabilistic complexity classes.
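The "one bit of entropy at a time" pattern is exactly how a classic RSA least-significant-bit oracle attack plays out. A toy sketch, assuming invented parameters throughout: the tiny primes, the plaintext 31337, and the `parity_oracle` endpoint are all fabricated for illustration, not taken from any real system. Because RSA is multiplicative, an attacker who can learn only the parity of each decryption recovers the whole plaintext in about log2(n) queries:

```python
import math
from fractions import Fraction

# Toy RSA key (tiny demo primes -- never real parameters).
p, q = 10007, 10009
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def parity_oracle(c):
    """The flaw: a hypothetical endpoint leaking the low bit of each decryption."""
    return pow(c, d, n) & 1

m = 31337          # the "secret" plaintext
c = pow(m, e, n)   # what the attacker intercepts

# RSA is multiplicative: c * 2^e decrypts to 2m mod n.  Since n is odd,
# the parity of 2^k * m mod n equals the k-th binary digit of m/n, so
# each query releases exactly one bit of entropy about m.
lo, width = Fraction(0), Fraction(n)
two_e = pow(2, e, n)
for _ in range(n.bit_length()):
    c = (c * two_e) % n
    width /= 2
    if parity_oracle(c):
        lo += width
print(math.ceil(lo))  # recovers the plaintext, 31337
```

Exact `Fraction` arithmetic matters here: the bounds track the binary expansion of m/n, and floating point would lose the final bits.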
Even a civil engineer learns about Monte Carlo simulation and the fast Fourier transform on a computer (not because it is relevant to designing architected structures, but because it trains the mind of an engineer concerned with failure analysis). If you look at the root keys of the better CAs, they are JUST NOT ONLINE. There is NO protocol and there is no embodiment; the crypto engineering is about showing that there CAN be no protocol. It has to seize up the moment the self-reflecting device recognizes that it is not in manual mode. The assumption is that ANY online or protocol-level presence of that keypair, or that implementation of the math, or that encoder of bits, makes for vulnerability, and thus systemic compromise (given the role of those root keys in control theory).

Furthermore, the certs do not exist, in the normal sense. The future root certs that one has prepared to replace the ones you can see are the most important. One has to use the insecure form to prepare the ground for the secure form. As soon as a cert is revealed, it has to prepare the ground for its secure successor, which is only secure while unused. This is why, in higher-level assurance theory, the root keys do not even exist, normally. They are subject to Shamir key splitting, using n-of-m regimes. Of course, one can argue that the situation described above now applies to the key splits, which is true, since they are "representations" and THUS vulnerable, by fact of existence in the physical world. At this point, the philosophy becomes one of difficulty, economic cost, and value: subjective parameters. The deterrent has to be founded in the nature of human existence, inducing co-dependency where possible. If you look at how the public CA network was conceived (contrasting with the PKI space), it leveraged co-dependency. To compromise it (leveraging all of the above) hurts more than it benefits.
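The n-of-m splitting mentioned above is Shamir's scheme: the key becomes the constant term of a random degree-(n-1) polynomial over a prime field, shares are evaluations of that polynomial, and any n of them reconstruct the key by Lagrange interpolation at x = 0. A minimal sketch, where the prime, the 3-of-5 regime, and the demo secret are illustrative choices only, not anything from a real CA ceremony:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

def split(secret, n, m):
    """Split `secret` into m shares such that any n reconstruct it (n-of-m)."""
    # Random polynomial of degree n-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(n - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, m + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # Modular inverse of den via Fermat's little theorem (PRIME is prime).
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(123456789, n=3, m=5)
print(combine(shares[:3]))                         # any 3 shares suffice
print(combine([shares[0], shares[2], shares[4]]))  # any other 3 as well
```

Fewer than n shares reveal nothing information-theoretically, which is the point: the key has no single physical representation to seize.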
One has to understand the power equation in social structures, and induce self-interest to produce an auto-block against exploitation of the limits of the technology. You have to create a "too big to fail" situation, as a professional crypto engineer.
 > From: henry.story@bblfish.net
> Date: Thu, 29 Dec 2011 16:00:12 +0100
> To: public-xg-webid@w3.org
> Subject: The Science of Insecurity
> 
> Here is a very interesting talk given at the 28c3 in Berlin today on how to analyse protocols for insecurity, using language complexity and the turing halting problem as a basic measure to delimit what cannot be resolved.
> 
>    http://www.youtube.com/watch?v=3kEfedtQVOY
> 
> So it would be an interesting work to look at the components we are using to see how these fit into this.
> 
> So we could look at the serialisations we are using
>  
>   - RDF/XML 
>   - Turtle
>   - NTriples (ok, this one is clearly parseable with regexps)
>   - RDFa
> 
> Then to look at the underlying protocols:
> 
>   - TLS and X509
>   - HTTP
> 
> From what I understand it looks like there are a couple of issues with X509 ASN.1 encodings I think due to the way numbers are encoded there. And HTTP has the Content-Length field. 
>   
>   Henry
> 
> Social Web Architect
> http://bblfish.net/
> 
> 
Received on Thursday, 29 December 2011 16:04:47 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Thursday, 29 December 2011 16:04:48 GMT