
RE: Peter's wish protocol

From: Peter Williams <home_pw@msn.com>
Date: Tue, 1 Feb 2011 15:51:50 -0800
Message-ID: <SNT143-w73CAAF1F65DE31405370192E50@phx.gbl>
To: <henry.story@bblfish.net>, <public-xg-webid@w3.org>

Let's deal with another technical issue, so we can use some pro terminology as and when necessary. (Note: I speed-type, merely typing out what I verbalize; don't consider what I utter here as "writing" to be referenced.)
 
Let's consider the claim: "The public key passed in the certificate is of major importance, as it is that public key that the server will use to prove in the TLS connection that the client knows the corresponding key."

The above may or may not be true. There is a better perspective, one that gets to the heart of the security requirement that is really at stake.
 
The SSL handshake is a sequence of messages, to be considered an atomic sequence. It provides the SEF (security enforcement function) of "peer entity authentication" (and nothing more). It provides it only when both parties (in the two-party variety of SSL) complete their state machines, having validated that the other party has received the Finished message (see the SSL3 spec).
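To make the "atomic sequence" point concrete, here is a toy sketch (not real TLS, and all names are illustrative) of the state-machine property: the peer entity authentication SEF is delivered only once both sides have verified the other's Finished message over the completed sequence.

```python
# Toy model of the SSL handshake as an atomic sequence: the peer entity
# authentication SEF holds only after BOTH parties verify the Finished
# message against the full handshake. Hypothetical names, not real TLS.

HANDSHAKE_ORDER = [
    "ClientHello", "ServerHello", "Certificate", "CertificateRequest",
    "ServerHelloDone", "ClientCertificate", "ClientKeyExchange",
    "CertificateVerify", "ChangeCipherSpec", "Finished",
]

class HandshakeStateMachine:
    def __init__(self):
        self.received = []
        self.finished_verified = {"client": False, "server": False}

    def receive(self, message):
        self.received.append(message)

    def verify_finished(self, party):
        # A party may mark Finished as verified only after the whole
        # sequence has been seen; otherwise no SEF is delivered at all.
        if self.received != HANDSHAKE_ORDER:
            raise ValueError("handshake incomplete: no SEF delivered")
        self.finished_verified[party] = True

    @property
    def peer_entity_authenticated(self):
        # The service exists only when both state machines completed.
        return all(self.finished_verified.values())

sm = HandshakeStateMachine()
for msg in HANDSHAKE_ORDER:
    sm.receive(msg)
assert not sm.peer_entity_authenticated   # messages alone are not enough
sm.verify_finished("client")
sm.verify_finished("server")
assert sm.peer_entity_authenticated
```

The design point the toy captures: there is no partial credit — until both Finished verifications complete, the handshake has delivered nothing.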
 
Everything else about SSL is about security associations: in short, using keys derived from the handshake to encrypt packets/frames of various types. (This is not to say that the design of the association properties is easy, or secure. Lots of threats to crypto derive from poor association design, as folks in the IPsec world found out in 2006.)
 
Per good security design for peer entity auth, the handshake authenticates the entities and agrees the master key, meeting the requirements of peer entity authentication (which include requirements for freshness, spoofing countermeasures, etc.). In the case of "mutual" authentication, both the client party and the server party have strong names for the other. The handshake has certain properties, in which cited keys become used keys, about which each party can confirm receipt by the other.
 
Where one happens to perform an nth handshake (n > 1), the resulting security association can (if so set up) add confidentiality services to the transfer of the handshake messages in the (n+1)th instance.
 
As in the foaf.me design today, one gets to profile sequences of handshakes as a server: handshake 1 authenticates the server to the anonymous client, and handshake 2 "upgrades" this state of security to "mutual" authentication (conducted under the cover of the association formed by handshake 1). This SEQUENCE of two shakes is a custom security primitive, building upon the base security primitive. In the foaf.me design, the second handshake builds upon the first, so a completed handshake 2 implies handshake 1 occurred (2 => 1, hence !1 => !2). Since 2 yields the client cert name and 1 yields the IP address, the cert name can be bound to the IP-level channel: no first association, no cert-authenticated name.
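The two-handshake upgrade can be sketched as a toy session object (a hypothetical API, not foaf.me's actual code): handshake 2 can only run under the association established by handshake 1, which is what licenses the inference from cert name to channel.

```python
# Toy model of the foaf.me two-handshake profile (illustrative names):
# handshake 1 authenticates the server to an anonymous client; handshake
# 2 runs under that association and upgrades the session to "mutual".
# Structurally, handshake 2 cannot happen without handshake 1 (2 => 1).

class Session:
    def __init__(self):
        self.association = None       # established by handshake 1
        self.client_cert_name = None  # established by handshake 2

    def handshake1(self, client_ip):
        # Server-only auth; yields an encrypted association + client IP.
        self.association = {"client_ip": client_ip}

    def handshake2(self, client_cert_name):
        # Conducted under cover of the association from handshake 1.
        if self.association is None:
            raise RuntimeError("handshake 2 requires handshake 1")
        self.client_cert_name = client_cert_name

    @property
    def mutual(self):
        return self.client_cert_name is not None

s = Session()
s.handshake1("192.0.2.7")                    # anonymous client, named server
s.handshake2("https://bob.example/#me")      # upgrade to mutual auth
# 2 => 1: a cert-authenticated name implies the IP-level association.
assert s.mutual and s.association["client_ip"] == "192.0.2.7"
```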
 
As a result of such handshaking, a server-side socket may indicate to a socket consumer the strongly authenticated name of the peer (and inferences about the IP address of the client). In the world of CGI, this is the "ascii-armored" client cert value itself, with no supporting chain of authority used by the SEF enforcement module within the server's security model when computing the inferences given above. It's just a nice blob, with a name inside. In many CGI libraries, the name is also parsed out and provided to the CGI user in addition.
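In Apache mod_ssl terms, for example, the ascii-armored blob is exported to CGI as `SSL_CLIENT_CERT` (when `SSLOptions +ExportCertData` is set) and the parsed-out subject name as `SSL_CLIENT_S_DN`; a minimal CGI consumer looks something like this sketch:

```python
#!/usr/bin/env python3
# Minimal CGI sketch: mod_ssl (with "SSLOptions +ExportCertData")
# exports the ascii-armored client cert and a pre-parsed subject name
# into the process environment; the script just reads blob and name.
import os

def client_identity(environ=os.environ):
    pem_blob = environ.get("SSL_CLIENT_CERT")  # the PEM "nice blob"
    subject = environ.get("SSL_CLIENT_S_DN")   # name parsed out for us
    return pem_blob, subject

if __name__ == "__main__":
    cert, name = client_identity()
    print("Content-Type: text/plain\n")
    print(f"authenticated peer: {name or 'anonymous'}")
```

Note the script does no chain validation itself; whatever "peer trust" decision is made happens either in the server's enforcement module or, as discussed below, in application code.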
 
As we know, thanks to some horrid politics a decade ago, folks still have the privilege in commodity systems, at this CGI API, of not only receiving the cert but receiving a specifically self-signed cert. The ability to receive this exists because the security enforcement controls for peer entity authentication can (still) be set to allow "peer trust" - that is, to opt out of PKIX-like cert chaining logics, should one wish.
 
The whole point of the opt-out from PKIX-like mandatory authority-chaining regime policies was to allow n interesting peer trust models to bloom, of which FOAF+SSL is one. There are several others, very well supported in Microsoft Windows land.
 
Now there is an implied statement being asserted by the socket provider to the socket user (the CGI consumer): that the SSL handshake was valid, and that there was peer entity authentication, optionally at the mutual level.
 
Assuming one has set the peer-trust policy for a given socket consumer, this is all one gets. But what a wonder it is. In the case of the RSA ciphersuites, it indicates that the ClientHello message was authenticated to the same entity to which the cert's public key is bound. Thus, the signature that protects the SSL handshake in order to deliver the assurance of the peer entity auth service happens to also origin-authenticate (a different SEF) the contents of the ClientHello, assuming these values are revealed at the CGI API.
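The reason the ClientHello gets origin-authenticated "for free" is that the client's signature covers a digest of the whole handshake transcript, ClientHello included. A toy sketch of that transcript-binding property (HMAC stands in here for the client's private-key signature; real TLS uses an asymmetric signature in CertificateVerify):

```python
# Sketch of transcript binding: the client signs a digest over ALL
# handshake messages, so any field carried in the ClientHello (e.g. a
# hypothetical webid claim) is covered by the same signature. HMAC is a
# stand-in for the client's RSA signature; this is not real TLS.
import hashlib
import hmac

client_key = b"stand-in for the client's private key"

def certificate_verify(transcript_messages):
    digest = hashlib.sha256(b"".join(transcript_messages)).digest()
    return hmac.new(client_key, digest, hashlib.sha256).digest()

transcript = [b"ClientHello|webid=https://bob.example/#me",
              b"ServerHello", b"Certificate", b"ClientKeyExchange"]
sig = certificate_verify(transcript)

# Tampering with the ClientHello (say, its webid claim) breaks the check,
# which is what "origin-authenticates the ClientHello" means here.
tampered = [b"ClientHello|webid=https://mallory.example/#me"] + transcript[1:]
assert hmac.compare_digest(sig, certificate_verify(transcript))
assert not hmac.compare_digest(sig, certificate_verify(tampered))
```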
 
Of course they are not, but don't worry! The client cert was not revealed in the CGI/servlet API either; still, folks insisted they wanted to add value by taking control of "peer trust" and performing some or other trust model in application code designed by the programmer, not the system vendor.
 
This, of course, is exactly what you and Bruno did, using the Apache servlet API.
 
So what really matters, I implore, is that the focus of the design here be on the properties of the handshake (a well-defined SEF), not the cert. The cert is just one side-effect output thereof, upon which one can then do exactly what you did: fashion a means of searching the graph of profiles to chain together trust points that talked to each other "about" the subjects of certs (URIs).
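Treating the cert as a side effect, the trust decision that remains can be sketched as: dereference the webid URI and compare the public-key material published in the profile with the key the handshake just proved possession of. The fetch and RDF parsing are stubbed out below; all names are illustrative, not the actual foaf+ssl code.

```python
# Hedged sketch of the foaf+ssl-style check: the handshake proves the
# client knows the key bound in the cert; the profile document, fetched
# from the webid URI, says which keys that URI's owner claims. Matching
# the two closes the loop. fetch_profile is a stand-in for the Web
# lookup; keys are modeled as (modulus, exponent) pairs.

def verify_webid(cert_key, webid, fetch_profile):
    """fetch_profile(webid) -> list of (modulus, exponent) pairs
    published in the profile document at that URI."""
    published_keys = fetch_profile(webid)
    return any(cert_key == (mod, exp) for (mod, exp) in published_keys)

# A stub profile store standing in for the Web-as-dictionary lookup.
profiles = {"https://bob.example/#me": [(0xB0B5_CAFE, 65537)]}
ok = verify_webid((0xB0B5_CAFE, 65537), "https://bob.example/#me",
                  profiles.__getitem__)
assert ok
```

Note that no certificate chain is consulted anywhere: the graph of profiles plays the role the authority chain plays in PKIX.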
 
> From: henry.story@bblfish.net
> Date: Tue, 1 Feb 2011 23:06:04 +0100
> To: public-xg-webid@w3.org
> Subject: Peter's wish protocol
> 
> On 1 Feb 2011, at 20:27, Peter Williams wrote in a thread archived at
> http://www.w3.org/mid/SNT143-w44720C811FBEFF7E72DA3992E50@phx.gbl
> 
> > The cert is a way of "getting browsers to do the security primitive" called the SSL handshake. It's nothing more. Arguably, the cert communicates the webid, and cert enrollment at least ties the webid URI to the public key, in a self-signed blob.
> 
> ( The certificates can be self signed or not. )
> 
> The public key passed in the certificate is of major importance, as it is that public key
> that the server will use to prove in the TLS connection that the client knows the corresponding key. 
> 
> The certificate then further claims that the owner/(knower?) of that public key has a global
> identifier called wid. The Relying party then does a dictionary lookup in the global distributed dictionary we know as the web on the meaning of wid, and finds that the meaning of that term is whoever is a knower of that key.
> 
> But come to think of it I see your point. The public key could also be fetched at the WebID profile, served over https in any number of formats, such as rdfa and it would work, and the client would never need to send the certificate to the server. 
> 
> > One NICE thing about having ClientHello communicate the webid is ...it DEPRIVES the world of PKI the excuse to try yet again to sell client cert lifecycle management processes, forcing them to focus on the profile doc instead.
> > 
> > here is my current wish list for a skeleton ideal scheme (due PURELY to the discussions held here, which I find stimulating).
> 
> > 1. clientHello communicates webid claim
> 
> with the client_certificate_url extension? How much bandwidth is really saved there when you
> have just a very minimal certificate going down the wire? Or rather, how many packets are saved, as those are the basic units of measurement? That would help in understanding the importance of this.
> 
> > 2. EE cert for client auth is ephemerally minted and (self-)signed by browser, thereby authenticating clientHello and its webid claim
> 
> What is an EE cert?
> 
> Anyway all the client needs to do is sign the something with the private key of the
> certificate selected by the user. 
> 
> > 
> > 3. new "cert type" defined per the TLS spec with help from IETF, in which that ephemeral EE cert is NOT ASN.1 on the wire but an xmldsig-signed datum. Other certs in the SSL message's client cert chain (if any) retain their ASN.1 value, to bring valuable legacy interoperability to bear while ensuring we do not project legacy formats further.
> 
> So creating a new cert format type is not really that important is it, if you fetch it remotely? A foaf file publishing a public key, served over https at the WebID location is enough. 
> 
> Though I am very much in favor of certs being in XML when served by the client, if it can be shown that the space issues are not serious. (binary xml?)
> 
> > 4. client cert support in CGI and page javascript APIs support client certs in ASN.1 and xmldsig, to drive new generation of apps.
> 
> Ok so those are jobs for library writers. That will happen if there is a need. The immediate need is the Social Web ( see my "Philosophy and the Social Web" http://www.slideshare.net/bblfish/philosophy-and-the-social-web-5583083 to get a bit
> of an idea of some of the serious political, philosophical, and social forces that are moving us all to participate on this list)
> 
> That need cannot wait for browsers to be changed. It has to start now with what is available. And developers/companies won't do much with SSL or TLS unless there is a nicely written down standard for it that is endorsed. Mostly because the received opinion is that client certificates are not usable (a received opinion born without taking into account linked data). So it is a key requirement for this group to have a spec that can be worked on and made usable NOW by developers.
> 
> Your idea above sounds like nice optimisation tricks and improvements that can be added to future browsers. I think it would be worth investigating those as WebID 2.0, or something that can even be done in parallel with the minimal WebID protocol that we know as foaf+ssl.
> But the real need for WebID is to get the Social Web going. Without adoption of the minimal spec, the advanced specs will not go anywhere. I am for releasing early and often, and not getting too far ahead of the needs. But since we are an incubator group, I think we could have a WebID 2.0 protocol sketch like this, giving some longer-term directions as to where IETF/W3C evolution could lead from our experience with WebID 1.0. It's a question of how much time it takes to work on.
> 
> Henry
> 
> 
> Social Web Architect
> http://bblfish.net/
> 
> 
Received on Tuesday, 1 February 2011 23:52:46 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 6 January 2015 21:06:22 UTC