Re: WebID prehistory

On 3 Feb 2011, at 17:13, Peter Williams wrote:

> Concerning ldap visibility and scope, we should ask: are all https endpoints publicly accesible? No. The vast majority of wifi routers in homes are http endpoints, but the endpoint is only exposed on the LAN. The same is true for most if not all the modems, with their administration http endpoints.

Agreed. But the issue is not whether a system designed to be global can also function in a restricted setting; it is whether a system designed for a restricted setting can also work in a global environment.

>  We have to be fair on ldaps.

This should not be an issue of being fair or not. It is an architectural issue: each system comes with its strengths and weaknesses. LDAP and X.500 have had a very successful history. But we need to show what WebID brings to the table that is new and important, what problem worth solving it solves that could not be solved with previous technologies. Inevitably we are going to have to show lacunae in what came before. If we can show that those were not part of the initial design decisions, or that the solutions they attempted were not as successful as initially hoped, then I don't see why this should be taken badly. They have had 30 years of success.

> Some of us cognizant of major shifts in the world of the cloud, as traditionally enterprise-centric ldap endpoints are extended beyond the LAN onto the subnets supporting the firm's cloud presence - turning the intranet ldap endpoints into extranet endpoints. For folks familar with SAML and webSSO, folks will know that ldap proxies exist to support public SAML endpoints, allowing the websso flow to leverage the semi-hidden directories as an attribute store and authentication authority. This extranet angle enable one to project private federations of ldap namespaces across the internet. This topic generalises as a wider issue, as below.

OK, so data stored in LDAP can be made globally accessible by tying it into a communication medium (the XML-based SAML format) that was designed to work in a global space. But then what is functioning is LDAP+SAML. As far as everybody else is concerned, it is just SAML; they may not even know whether LDAP or SQL is behind it.

This can of course be done for any data format. D2RQ enables relational databases to publish their data in the semantic web space. The rdb2rdf working group I mentioned in my previous mail is working on standardising this process. 
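To illustrate that kind of indirection, here is a minimal sketch in the D2RQ mapping language; the database, table, and column names are made up for the example:

```turtle
@prefix d2rq: <http://www.wiwiss.fu-berlin.de/suhl/bizer/D2RQ/0.1#> .
@prefix map:  <#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# Hypothetical relational database holding staff records
map:db a d2rq:Database ;
    d2rq:jdbcDSN    "jdbc:mysql://localhost/staff" ;
    d2rq:jdbcDriver "com.mysql.jdbc.Driver" .

# Each row of the (hypothetical) People table is published as a foaf:Person
map:Person a d2rq:ClassMap ;
    d2rq:dataStorage map:db ;
    d2rq:uriPattern  "people/@@People.id@@#me" ;
    d2rq:class       foaf:Person .

# The name column becomes foaf:name
map:personName a d2rq:PropertyBridge ;
    d2rq:belongsToClassMap map:Person ;
    d2rq:property foaf:name ;
    d2rq:column   "People.name" .
```

The consumer of the resulting RDF never sees the tables or columns, only foaf vocabulary, which is exactly the point.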

The problem is that dereferencing ldap URLs bypasses that indirection. It takes one straight to the LDAP representational layer, which, for the reasons I gave in my previous mail, is easily found lacking.

Note that the same issue would arise if you put SQL databases up on the web directly. They would be too complicated for anyone to understand or use, given that the schemas would be completely local to the database owner.

> We also have a "scoping" decision to take: just like http/https is defined for use in intranets, is webid protocol to be usable in an intranet setting, using private profiles that are NEVER to be exposed to the web?

Of course, it should work there too, as specced out now.

You will need to teach your users when they are on the intranet and when they are not, and so you will lose in ease of use and in the other advantages that come from tying yourself into a global information space.

In my view the internet/intranet/extranet distinction will fade away with time; WebID makes it unnecessary. Intranet/extranet boundaries provide only very coarse-grained access control: access control at the level of a company, at the (fire)walls of the company, to be precise. That, by the way, is exactly what led to Wikileaks. Giving half a million to a million people access to sensitive documents, because once you're inside the wall you're OK, is not much of a security model.

It is much better to have resource-level access control, which is what WebID enables. So why should one still need the intranet/extranet distinction? It just makes things unwieldy.
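Resource-level access control can be expressed, for example, with the W3C ACL ontology; the WebID and resource URIs below are invented for the sketch:

```turtle
@prefix acl: <http://www.w3.org/ns/auth/acl#> .

# Hypothetical: only the agent identified by this WebID may read the report,
# wherever on the network the request comes from
[] a acl:Authorization ;
   acl:accessTo <https://company.example/reports/2011-q1> ;
   acl:agent    <https://company.example/people/alice#me> ;
   acl:mode     acl:Read .
```

With rules of this kind attached to each resource, the wall around the network no longer has to carry the access-control burden.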

>  I find that question philosophical, as I include intranets in "the web" - as my web includes my private compartments, and I expect yours  to exist too (and yours, and ...). But then... I'm a security type: I think in terms of compartments. Others may want to say (in traditional W3C tone) there does exist a nether world (of un-web behaviour) that is slightly second class, called intranets and cloud hosted extranets, and those enterprise who merely span over the web/internet without "contributing".

See my point above.
In any case, to make this point, at Sun I developed an internal WebID, which I then linked to from my public profile:

:me    a foaf:Person;
       = <http://sixiron.sfbay.sun.com:8080/FoafServer/services/people/155492#HS> .

It would have been better to do the opposite, come to think of it: to link from the internal profile to the external one only.

Sun engineers who had internal access would be able to follow the link; others would not.

>  Perhaps have a look at the world of certs to help us to see how certs (a security construct) address compartments in the unweb/web of today: If you look at the URI pointers to CRLs that are stored within certs issued by the typical LAN CA, there are probably 3: http://intranet-netbios-name/foo.crl http:/public.com/foo.crl file://c:\\domain\\certSvr\foo.crl. I.e. multiple URI, each aligned with a naming practice for scheme and authority that cooperate with the "visibility": of the compartment, and the certs use pattern.

Yes, intranets/extranets bring complexities of their own that require complex and tedious solutions. owl:sameAs helps here. In fact, since every WebID is its own CRL, you can see how we end up creating linked CRLs via owl:sameAs.
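Concretely, the linking could look like this; both profile URIs are hypothetical:

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .

# Internal profile asserting it names the same agent as the public one.
# A verifier that can reach both documents can follow the link in
# either direction to check the identity.
<https://intranet.example/people/alice#me>
    owl:sameAs <https://public.example/people/alice#me> .
```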

(file URLs are URLs only in a very limited sense. They are certainly not URLs if you don't fill in the domain name. I would not bother with those.)

>  
> So perhaps  we incidentially identified an issue: intranet naming (in webids) and internet naming (in webids). Does this go into the multiple URI issue bucket?

Yes. It's a multiple-URI issue we can solve easily, and another reason to be grateful that one can add multiple URIs to the SAN. You are right: we should add that as an argument in favour of multiple URIs in a SAN.
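As a sketch of how that would look in practice, an OpenSSL request configuration can place both an internet-facing and an intranet-only WebID in the certificate's subjectAltName (the hostnames and profile paths here are made up):

```ini
[ req ]
distinguished_name = dn
req_extensions     = v3_san

[ dn ]

[ v3_san ]
# Two URI entries: a public WebID and a hypothetical intranet-only one.
# A verifier dereferences whichever of the two it can reach.
subjectAltName = URI:https://public.example/people/alice#me, URI:https://intranet.example/people/alice#me
```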

>  
> > > The authenticated directory operation in 1998 had a validity model much like FOAF+SSL had - in that the server receiving the peer entity authentication handshake would typically send the client cert in support, and the receiving server would then issue an callback operation to collect/verify that the cert was indeed in named directory entry, once located by an act of subject name de-referencing. Obviously, its critical to ensure the requesting entity is not being spoofed or misled about the agents authority to speak for that container, authoritatively. 
> > 
> > Would you like to write this all up on the wiki, so that we can refer people to it? I think this could be a deliverable. I am thinking we could put this in a space where we do a series of comparison of WebID with other technologies that were very close to getting this right.
> > 
> > We just need to see where the best space to put this up would be. This would be a bit like a spec, in that we would then have to go over it and edit it as a group, but it won't have to be as tightly written as the spec itself.
> > 
> > The paper/note should be relatively short: 1 or max 2 pages. Perhaps we have a template for this hanging around? 
> 
> thats a good idea, I need a volunteer even at stage 1 (not Henry) - a co-author. Person should know have standard familiarity with the nature of ldap, but should be mostly rigorous as a academic writer. This will complement me. I have the know-how in the theory behind ldap (which we can call X.500), I just dont have any skills in writing in the manner of text books. (What I once had Ive lost, having written a million emails since then.)

OK, very good. We just need to find a template and a volunteer to put this together nicely.

>  
> Im happy to take the 3 notes already written and cast them into the form of writing- proably keeping about 50% of the sentences. Then I want someone to go and simply do a (vicious) rewrite, offline. Then ill go back and ensure any lost know-how is returned. We can post the page, and the wider community can then take it over , and do with it as they want when re-editing.

There are bound to be some good templates for this. I think the W3C people will know where to look.

Henry

Social Web Architect
http://bblfish.net/

Received on Thursday, 3 February 2011 17:13:02 UTC