- From: Henry Story <henry.story@bblfish.net>
- Date: Mon, 8 Aug 2011 00:43:53 +0200
- To: Dave Raggett <dsr@w3.org>
- Cc: Anders Rundgren <anders.rundgren@telia.com>, "public-identity@w3.org" <public-identity@w3.org>
On 7 Aug 2011, at 21:47, Dave Raggett wrote:

> On 06/08/11 10:44, Henry Story wrote:
>> On 6 Aug 2011, at 10:39, Anders Rundgren wrote:
>>
>>> I believe we have entered another phase of web development where alternative
>>> routes to standardization are becoming more common due to the slowness and
>>> political difficulties associated with SDO processes.
>>>
>>> That just about everybody is connected to the Internet and can update
>>> their SW platform in minutes makes the new ecosystem highly dynamic.
>>>
>>> It isn't even necessary to get everything completely right from scratch.
>>> My experience @ TrustedComputingGroup indicates that the traditional way
>>> of developing stuff for the masses is, simply put, counter-productive.
>>>
>>> IMO "all bets are off" regarding the final solution for secure and
>>> ubiquitous access to the Internet. It presumably lies in the hands of
>>> browser vendors and service providers.
>>
>> Yes, but if you look at it you will see that the problem they will all face is the problem of reference, not the problem of cryptography. It is the problem of trust in CAs that will always come back. So unless one works on that - a semantic web task - the issues will keep coming back. People are always mesmerised by syntax, thinking it is a syntactic problem they are confronting, when in fact it is at a different layer: distributed trust semantics.
>
> I agree with the issue of trust. CAs don't really reflect the trust models we have for people and organisations.

Yes. CAs are currently forced to be a rigidly selected group, because their public key has to appear in every browser for it to be worth servers having a certificate signed by them. The rigidity is increased further in that their guarantee is all or nothing. This just about works for authenticating servers - though something like the IETF's DNSSEC-based DANE will be a big improvement. But for client authentication it is hopeless. There are too many agents in the world to authenticate and identify for a limited number of CAs to take on the role. What would they certify? Someone's name? Who cares - it is the social network that counts. So it is not just that the trust model does not fit; it is also going to be impossible to do this for agents when we move from the relatively small number of servers that have TLS (and that number is still very small) to needing to authenticate people and organisations. Luckily we can use the same technology and tie it into the relevant trust models (which is what http://webid.info/ is about).

> For businesses, I would like to see credentials provided by national or regional bodies such as Companies House in the UK, as well as organizations charged with responsibilities for oversight of particular industry segments. For individuals, you would have government issued credentials, as well as scoped credentials such as for current membership as a student at a given university.

Yes. This can be done easily with linked data; it is just the ontology and examples that need to be worked on. My thought is that it is easiest to start by certifying one's friends, because you can do that without needing anyone's permission. The next step is having WebIDs published on university pages. Institutional WebIDs will be more valuable for work and research than more generalist identity sites. It will then be just a step for those universities to point to the best projects they have, and make those the official ones. And a bit later these could be tied into the Web Profile describing the university.
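To make that concrete, here is a rough sketch - an illustration only, with invented names, URLs and key values, written in Python with rdflib - of how a relying party could check that the RSA key in a presented client certificate is the one published in such a WebID profile, alongside the foaf:knows links that carry the social network:

```python
from rdflib import Graph, URIRef, Namespace

CERT = Namespace("http://www.w3.org/ns/auth/cert#")

# A hypothetical profile, as it might be published at the WebID URI.
# Names, URLs and key values are invented for this example.
PROFILE = """
@prefix cert: <http://www.w3.org/ns/auth/cert#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

<https://example.org/people/alice#me> a foaf:Person ;
    foaf:name "Alice" ;
    foaf:knows <https://example.edu/staff/bob#me> ;   # friend / colleague links
    cert:key [ a cert:RSAPublicKey ;
               cert:modulus "00cafebabe01"^^xsd:hexBinary ;
               cert:exponent 65537 ] .
"""

def webid_key_matches(webid, modulus_hex, exponent, profile_ttl):
    """True if the profile published at `webid` lists the presented RSA key."""
    g = Graph()
    g.parse(data=profile_ttl, format="turtle")
    person = URIRef(webid)
    for key in g.objects(person, CERT.key):
        mod = g.value(key, CERT.modulus)
        exp = g.value(key, CERT.exponent)
        if mod is None or exp is None:
            continue
        # Compare the published key with the one taken from the certificate.
        if (str(mod).lstrip("0").lower() == modulus_hex.lstrip("0").lower()
                and int(exp) == exponent):
            return True
    return False

if __name__ == "__main__":
    # In a real deployment the WebID URI and the public key are taken from the
    # client's TLS certificate, and the profile is fetched by dereferencing that URI.
    ok = webid_key_matches("https://example.org/people/alice#me",
                           "00cafebabe01", 65537, PROFILE)
    print("key matches profile" if ok else "key does not match profile")
```

Once the key check succeeds, the relying party is free to follow the foaf:knows and institutional links in the profile to decide how far to trust the authenticated agent.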
The university's profile would then end up getting pointed to by some government institution. So one can see how a bottom-up standard can be built, where everyone can participate in small ways and the network effect can be used to create metastability. Some can participate in small ways, others in bigger ways, of course. Hopefully we can get more social networks to use linked data to connect these island social networks together, the way the web could link up all the islands of documents. What http://webid.info/ offers here is just to tie the user to the data in such a way that he can get access to things because of the quality of his network - access he would not get were he not so related - spurring a quest for people to have quality relations. (Privacy is then dealt with using access control.)

As these bottom-up links are made first at the individual and then at the institutional level, it will become possible for international university links to be made, so that a robot from anywhere in the world could end up finding out that something was a university profile, that a URI identified a researcher, and perhaps even what international projects (s)he was participating in and what his or her interests were.

> Strong credentials are needed for privacy friendly authentication where the relying party is given a proof that the authenticated party has certain attributes in a strong credential, but via a process satisfying the principle of minimal disclosure of personal information for the task in hand.

I think a lot of this can be done by linking between sites, with access control on those sites over which attributes someone can see. Sometimes signed statements will also be needed, though those tend to be more cumbersome to work with.

> I plan to work on extending webkit and Mozilla to support this, as working code is always more compelling than just talk. However, to realize the trust models we need to discuss what is needed to support a culture of credentials that match up to real world requirements.

What are you planning to do there?

Henry

> --
> Dave Raggett <dsr@w3.org>  http://www.w3.org/People/Raggett

Social Web Architect
http://bblfish.net/
Received on Sunday, 7 August 2011 22:44:34 UTC