Re: WebID prehistory

On 3 Feb 2011, at 13:35, Peter Williams wrote:

> Two different issues:
>  
> 1. working with ldaps URIs, as webids. 
>  
> Yes, this tends to be about cooperating with a lot of <1-million-entry max servers. There are just a lot of them, as many as there are LANs, essentially. But it's a sure way to get Microsoft on board here. They have lots to contribute, once they feel it's viable and doable in a 12-month period.

There are a number of options there. 

A. One could use ldaps:// URLs as well as a WebID in the SAN field

  A URI implies a global namespace. But how global can LDAP, as it is now, really be? How many LDAP servers are globally accessible? Not many, is my guess. Why?

  1. Perhaps because every enterprise uses its own attribute-value pair definitions. LDAP attribute-value pairs are not globally agreed upon, or only partially so (they are not URIs). That means that a client dereferencing an LDAP URI of some company could only guess at the meaning of the returned document. Not a good way to build interoperable software on a global scale.
  2. Can an LDAP entry refer to a user in another LDAP database? Say, can a Microsoft entry refer to an IBM employee, with the relationship between them being clear (see point A.1 above)? Without that one cannot have a social web.
  3. Perhaps because of access control issues. Does LDAP have a good security model for giving access to pieces of the tree? If so, have rules ever been built to allow a user identified via some other, unknown LDAP database partial access?

 Answering these questions will help explain what the semantic web brings to the table (and also why the X.500 enterprise was incomplete). The semweb essentially solves all of the above problems, including the issue of caching (done at the REST level).
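To make point A.1 concrete, here is a minimal Python sketch of what giving attributes global, dereferenceable names buys you. The FOAF property URIs are real; the mapping table and the example entry (including the locally-defined `deptCode` attribute) are hypothetical illustrations, not part of any standard.

```python
# Sketch: two directories may use the same local attribute name with
# different meanings; a client can only guess. Mapping attributes to
# URIs makes the meaning globally shared and dereferenceable.
# The LDAP-attribute-to-URI table below is a hypothetical example.
LDAP_TO_RDF = {
    "cn":   "http://xmlns.com/foaf/0.1/name",
    "mail": "http://xmlns.com/foaf/0.1/mbox",
}

def entry_to_triples(subject_uri, ldap_entry):
    """Translate an LDAP attribute-value entry into RDF-style triples,
    keeping only attributes whose global meaning we actually know."""
    return [
        (subject_uri, LDAP_TO_RDF[attr], value)
        for attr, values in ldap_entry.items()
        if attr in LDAP_TO_RDF
        for value in values
    ]

triples = entry_to_triples(
    "https://example.com/people/alice#me",   # hypothetical WebID
    {"cn": ["Alice"],
     "mail": ["mailto:alice@example.com"],
     "deptCode": ["X17"]},                   # locally defined: dropped
)
```

Note that the locally-defined attribute is simply unusable by an outside client, which is the interoperability problem described above.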

B. Work with LDAP at the Web layer

 That won't be much work. I doubt that many people interoperate with LDAP using LDAP tools. Most people interact with a web front end. Microsoft probably has some components to do that already. So in that case it is just a matter of making it easy to map those LDAP ids to http WebIDs, and working at the web layer. Microsoft could certainly do that in 12 months. (I am happy to help them out there.) This type of mapping is done all the time in RDF land: see the great D2RQ Java tool, or the work done in the http://www.w3.org/2001/sw/rdb2rdf/ working group.
   This is another way of showing what the WebID protocol brings to the original X.500 vision: what this type of mapping enables is, essentially, linked data.
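The DN-to-WebID mapping in option B could be as simple as the following sketch. The base URL and the path scheme (leftmost RDN value plus `#me`) are hypothetical choices for illustration; a real front end would pick its own scheme.

```python
from urllib.parse import quote

# Hypothetical scheme: each directory entry gets a dereferenceable
# https:// WebID served by a web front end at an assumed base URL.
BASE = "https://directory.example.com/id/"

def dn_to_webid(dn):
    """Map an LDAP DN to an https:// WebID (one possible scheme among many)."""
    # Take the leftmost RDN, e.g. "uid=alice" from "uid=alice,ou=people,...",
    # and use its value as the URL path segment.
    leftmost = dn.split(",")[0]
    _, _, value = leftmost.partition("=")
    return BASE + quote(value) + "#me"

webid = dn_to_webid("uid=alice,ou=people,dc=example,dc=com")
# webid == "https://directory.example.com/id/alice#me"
```

The front end would then serve a profile document at that URL, which is what makes the identifier usable at the web layer.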

>  
> 2. Understanding the 1988 edition Directory security model as prior art, simply to stock up on patent infringement defenses. It's good that it's all 20+ years old, and so solid a reference (a UN body, called ISO, recognized by national standards authorities).
>  
> Yes, this all anticipated what never happened: the web-scale global directory. But... now we can build it, using more modern technology!

Exactly

>  
> The authenticated directory operation in 1988 had a validity model much like FOAF+SSL has, in that the server receiving the peer-entity authentication handshake would typically be sent the client cert in support, and the receiving server would then issue a callback operation to collect/verify that the cert was indeed in the named directory entry, once located by an act of subject-name dereferencing. Obviously, it's critical to ensure the requesting entity is not being spoofed or misled about the agent's authority to speak for that container, authoritatively.
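The callback-style check described above can be sketched in WebID terms: the verifier dereferences the claimed identifier and confirms the presented key is published there. `fetch_profile_keys` is a hypothetical helper standing in for an HTTP GET plus RDF parse, and the in-memory "directory" is a stand-in for the web; this is an illustration of the validity model, not an implementation of the spec.

```python
# Sketch of the directory-callback validity model in WebID terms.
def verify_claimed_id(claimed_webid, presented_pubkey, fetch_profile_keys):
    """Return True iff the public key presented in the TLS handshake is
    listed in the profile document located by dereferencing the claimed
    WebID. fetch_profile_keys is a hypothetical dereferencing helper."""
    published_keys = fetch_profile_keys(claimed_webid)  # the "callback"
    return presented_pubkey in published_keys

# Toy run with an in-memory "directory" standing in for the web:
profiles = {"https://example.org/alice#me": {"KEY-A", "KEY-B"}}
lookup = lambda uri: profiles.get(uri, set())

ok = verify_claimed_id("https://example.org/alice#me", "KEY-A", lookup)
```

As in the X.500 model, the verifier ends up trusting the name resolver to locate the right document, and the server hosting it to enforce who may write there.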

Would you like to write this all up on the wiki, so that we can refer people to it?  I think this could be a deliverable. I am thinking we could put this in a space where we do a series of comparison of WebID with other technologies that were very close to getting this right.

We just need to see where the best space to put this up would be. This would be a bit like a spec, in that we would then have to go over it and edit it as a group, but it won't have to be as tightly written as the spec itself.

The paper/note should be relatively short: 1 or max 2 pages. Perhaps we have a template for this hanging around? 

>  
> 
>  
> > From: henry.story@bblfish.net
> > Date: Thu, 3 Feb 2011 12:19:59 +0100
> > To: public-xg-webid@w3.org
> > Subject: WebID prehistory
> > 
> > Peter Williams has mentioned how close WebID is to what X.500 wanted to be. Here is a recent contribution of his:
> > 
> > On 3 Feb 2011, at 05:26, Peter Williams wrote:
> > 
> > > In the original X.500 strong auth model, one could have, in a directory entry/container of name N, any number of user certs, of any subject name. There was no relationship between the container name and the subject name in the cert (in the formality of the standard, that is, though in practice there often was). Presence in the entry proved someone had exploited write-privs within the security policy to publish it there. Having cited a cert bearing a subject name claim, the relying party only had to do one thing: confirm that the very same cert bytes, in canonical encoding, were present in the container located by the claimed subject name. The verifier thus had to "trust a name resolver" to correctly locate a container, given a claimed name. The resolver also had to trust the server to be correctly enforcing its access control model.
> > > 
> > > If the container had the name dc=auchentoshan, dc=cs, dc=ucl, dc=ac, dc=uk and a yellow-pages entry had the name host=auchentoshan, o=UCL-CS, l=Internet, then if the subject field in a cert presented to a server was the latter, and the same cert was to be found in the entry once the name resolver had done its location work to retrieve the entry and its various attributes, the cert was "valid".
> > 
> > I think we should develop this a little by comparing it with what WebID does and how the semantic web helps solve the problem of the meaning of ldap directories, how it helps link across ldap directories, and so why WebID helps turn the X500 vision into a global one, not just one tied to a company... 
> > 
> > In another post I think I understood Peter to have written that early SSL implementations would fetch the certificate from the X.500 directory. The client would not send it, as it does now, to the server. So putting the certificate in the client could be thought of as a sort of caching mechanism.
> > 
> > It would help to have a clear write up of this, factually verified, with proper citations, because this would then help show how we are fulfilling the X500 dream, and help align the X500 people's intuitions with the semweb world.
> > 
> > Should I add this as an issue?
> > 
> > Henry
> > 
> > Social Web Architect
> > http://bblfish.net/
> > 
> > 

Social Web Architect
http://bblfish.net/

Received on Thursday, 3 February 2011 13:29:48 UTC