RE: Peter's wish protocol

Let me deal with an easy one: EE
 
An EE cert is an end-entity cert. It's a fundamental property of link chains of certificates, used in strong authentication procedures (such as those exploited by SSL handshake designers). It's a common abbreviation among folks who work with certificates.
 
It contrasts with a non-EE cert :-), otherwise known as an authority.
 
Much as here we want a formal object class (person :: foaf-agent) to be a defining property of a webid (because of the formal design-time relationships to be enforced by the cert ontology at execution time), so folks in the X.500 1988 world did essentially the same thing.
 
1) Only an entry with object class "CertificationAuthority" could hold a cert designated as an authority. The name in the cert had to reference a container, one of whose component classes was CertificationAuthority. The semantics of an authority were... act like a third party (e.g. VeriSign/Symantec) and mint certs about others. (Both rules are sketched in code just after item 2.)
 
2) An entry with the object class "strongAuthenticationUser" held a cert designated not-an-authority. The semantics of a "user" were... one was empowered to use crypto primitives (like signing, encryption...) to perform strong authentication procedures, just like those in https/SSL.
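To make the rule concrete, here is a minimal Python sketch of it. The entry names and class spellings below are mine, purely for illustration - nothing here is a real directory API:

    # Hypothetical model of the X.500 1988 rule above: whether a cert
    # held by a directory entry may be treated as an authority cert
    # depends on the entry's object classes, not on the cert value itself.
    def may_act_as_authority(object_classes):
        # only an entry with component class CertificationAuthority
        # may hold a cert designated as an authority
        return "certificationAuthority" in object_classes

    def may_strong_authenticate(object_classes):
        # a "user" entry is empowered to run strong authentication
        # procedures, but never to mint certs about others
        return "strongAuthenticationUser" in object_classes

    verisign_entry = {"top", "organization", "certificationAuthority"}
    alice_entry = {"top", "person", "strongAuthenticationUser"}

    assert may_act_as_authority(verisign_entry)
    assert not may_act_as_authority(alice_entry)
    assert may_strong_authenticate(alice_entry)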
 
By inspecting the relationships of identifiers, certs bearing name identifiers, containers of certs (what we would call the PPD), and classes of facts stored in a PPD-like container, the verifier of a strong auth procedure garnered facts "about" a cert (or its subject, rather) that were not asserted within the cert value itself. This arrangement is VERY much like FOAF+SSL, and is the same essential theory connecting containers, references/names, lookups, and formal schemas that constrain "valid relationships" to be enforced by security modules.
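For those who have not seen the FOAF+SSL analogue of that lookup, here is a rough Python sketch, assuming rdflib is installed and assuming the profile publishes the key via cert:key / cert:modulus / cert:exponent - the exact vocabulary is still being settled, so treat the predicate names as placeholders:

    # The verifier garners facts "about" the cert's subject from a
    # container (the WebID profile document), not from the cert itself.
    import rdflib

    CERT = rdflib.Namespace("http://www.w3.org/ns/auth/cert#")

    def webid_claim_holds(webid, presented_modulus, presented_exponent):
        g = rdflib.Graph()
        g.parse(webid)  # the lookup in the global distributed dictionary
        for key in g.objects(rdflib.URIRef(webid), CERT.key):
            mod = g.value(key, CERT.modulus)
            exp = g.value(key, CERT.exponent)
            if mod is None or exp is None:
                continue
            # cert:modulus is conventionally hexBinary; normalize to ints
            if int(str(mod), 16) == presented_modulus and \
                    int(str(exp)) == presented_exponent:
                return True
        return False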
 
Now, that world ALL CHANGED in 1996 or so, with the ISO decision to release X.509 certs from the directory and let them roam free in who knows what security protocols designed by internet folks. Thus, an extension capability was added to cert types, and certain extension types were standardized to convey the same kind of information as was previously obtained by inspecting the containers and the X.500 graph (known as a DIB, if anyone cares about old crap).
 
One extension, mandatory in the PKIX profile, is called basicConstraints - in which the [named] issuer of the cert declares to all users of the cert: the subject is an EE (leaf), or the subject is an authority (not a leaf).
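For concreteness, here is a minimal sketch of how a relying party reads that declaration, using the pyca/cryptography package (a modern convenience on my part; any ASN.1 decoder would do):

    from cryptography import x509

    def is_end_entity(pem_bytes):
        # classify a cert as EE (leaf) or authority by reading the
        # basicConstraints extension the [named] issuer placed in it
        cert = x509.load_pem_x509_certificate(pem_bytes)
        try:
            bc = cert.extensions.get_extension_for_class(x509.BasicConstraints)
        except x509.ExtensionNotFound:
            # with no declaration of authority status, the cert cannot
            # be treated as an authority cert
            return True
        return not bc.value.ca  # ca=False -> EE; ca=True -> authority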
 
In link chains of certs (in PKIX land), authority certs link in some sequence, which ends with an EE cert.
 
In PKIX design, a subject with an EE cert cannot mint further certs using its signing key, unless they are ephemeral and target only its peer. The subject is denied by design any "certificate signing" power. This is what essentially distinguishes it from authorities.
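Putting the last two paragraphs together, the chain-shape rule a verifier enforces looks roughly like this (reusing is_end_entity() from the sketch above; signature and name-chaining checks are deliberately omitted):

    def chain_shape_ok(pem_chain):
        # certs ordered leaf-first, as in a TLS Certificate message:
        # exactly one EE at the front, authorities (and only
        # authorities) behind it
        if not pem_chain:
            return False
        leaf, authorities = pem_chain[0], pem_chain[1:]
        return is_end_entity(leaf) and all(
            not is_end_entity(pem) for pem in authorities)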
 
In SSL/TLS that conforms to the internet standard-to-be, a verifier must enforce the rules above, refusing to connect to any party that does not either assert a "well-formed cert chain" (as above), or offer an identifier from which the verifier can discover on its own a similarly well-formed chain.
 
There is a lot more theory. But the above is a start. What matters is... it's great that we are leveraging procedures already standardized - as that makes patents REALLY hard to get.
 
As we all know here, authority is in the mind of the beholder - and we are proposing that self-asserted authority is as legitimate as third-party-asserted authority.
 
One thing folks are struggling with in PKIX land is: whether to ban the self-signed cert in "PKIX-complying" systems. FOAF+SSL relies on a world in which a given software agent can opt in to being PKIX-complying, and opt out again (when using self-signed certs with non-PKIX enforcement semantics).
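Detecting which world a peer is living in is cheap, by the way. A sketch (issuer == subject is the usual self-signed heuristic; a thorough check would also verify the signature with the cert's own key):

    from cryptography import x509

    def looks_self_signed(pem_bytes):
        cert = x509.load_pem_x509_certificate(pem_bytes)
        # necessary but not sufficient evidence of self-signing
        return cert.issuer == cert.subject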
 
The argument PKIX folks make is that the very freedom provided by self-signed certs contaminates the assurance of the internet, which should only be PKIX-conforming (or military-spec equivalents). I can personally see what they are saying, but for obvious reasons, we just moved from technology to crypto-politics - about which there will surely be one or two opinions.

> From: henry.story@bblfish.net
> Date: Tue, 1 Feb 2011 23:06:04 +0100
> To: public-xg-webid@w3.org
> Subject: Peter's wish protocol
> 
> On 1 Feb 2011, at 20:27, Peter Williams wrote in a thread archived at
> http://www.w3.org/mid/SNT143-w44720C811FBEFF7E72DA3992E50@phx.gbl
> 
> > The cert is a way of "getting browsers to do the security primitive" called the SSL handshake. It's nothing more. Arguably, the cert communicates the webid, and cert enrollment at least ties the webid URI to the public key, in a self-signed blob.
> 
> (The certificates can be self-signed or not.)
> 
> The public key passed in the certificate is of major importance, as it is that public key
> that the server will use to verify, in the TLS connection, that the client knows the corresponding private key.
> 
> The certificate then further claims that the owner/(knower?) of that public key has a global
> identifier called wid. The relying party then does a dictionary lookup, in the global distributed dictionary we know as the web, on the meaning of wid, and finds that the meaning of that term is whoever is a knower of that key.
> 
> But come to think of it I see your point. The public key could also be fetched at the WebID profile, served over https in any number of formats, such as rdfa and it would work, and the client would never need to send the certificate to the server. 
> 
> > One NICE thing about having ClientHello communicate the webid is ...it DEPRIVES the world of PKI the excuse to try yet again to sell client cert lifecycle management processes, forcing them to focus on the profile doc instead.
> > 
> > here is my current wish list for a skeleton ideal scheme (due PURELY to the discussions held here, which I find stimulating).
> 
> > 1. clientHello communicates webid claim
> 
> with the client_certificate_url extension? How much bandwidth is really saved there when you
> have just a very minimal certificate going down the wire? Or rather, how many packets are saved, as those are the basic units of measurement. That would help us understand the importance of this.
> 
> > 2. EE cert for client auth is ephemerally minted and (self-)signed by browser, thereby authenticating clientHello and its webid claim
> 
> What is an EE cert?
> 
> Anyway all the client needs to do is sign something with the private key of the
> certificate selected by the user. 
> 
> > 
> > 3. new "cert type" defined per the TLS spec with help from IETF, in which that ephemeral EE cert is NOT ASN.1 on the wire but an xmldsig-signed datum. Other certs in the SSL message's client cert chain (if any) retain their ASN.1 value, to bring valuable legacy interoperability to bear while ensuring we do not project legacy formats further.
> 
> So creating a new cert format type is not really that important is it, if you fetch it remotely? A foaf file publishing a public key, served over https at the WebID location is enough. 
> 
> Though I am very much in favor of certs being in XML when served by the client, if it can be shown that the space issues are not serious. (binary xml?)
> 
> > 4. CGI and page javascript APIs support client certs in ASN.1 and xmldsig, to drive a new generation of apps.
> 
> Ok so those are jobs for library writers. That will happen if there is a need. The immediate need is the Social Web (see my "Philosophy and the Social Web" http://www.slideshare.net/bblfish/philosophy-and-the-social-web-5583083 to get a bit
> of an idea of some of the serious political, philosophical, and social forces that are moving us all to participate on this list).
> 
> That need cannot wait for browsers to be changed. It has to start now with what is available. And developers/companies won't do much with SSL or TLS unless there is a nicely written-down standard for it that is endorsed. Mostly because the received opinion is that client certificates are not usable - a received opinion that was born without taking into account linked data. So it is a key requirement for this group to have a spec that can be worked on and made usable NOW by developers.
> 
> Your idea above sounds like nice optimisation tricks and improvements that can be added to future browsers. I think it would be worth investigating those as WebID 2.0, or something that can even be done in parallel with the minimal WebID protocol that we know as foaf+ssl.
> But the real need for WebID is to get the Social Web going. Without adoption of the minimal spec, the advanced specs will not go anywhere. I am for releasing early and often, and not getting too far ahead of the needs. But as we are an incubator group, I think we could have a WebID 2.0 protocol sketch like this, giving some longer-term directions as to where IETF/W3C evolution could lead from our experience with WebID 1.0. It's a question of how much time it takes to work on.
> 
> Henry
> 
> 
> Social Web Architect
> http://bblfish.net/
> 
> 