
RE: webid trust model, one or multiple?

From: peter williams <home_pw@msn.com>
Date: Tue, 3 May 2011 09:18:48 -0700
Message-ID: <SNT143-ds1995762B5DB1FCE4736A50929E0@phx.gbl>
To: "'Kingsley Idehen'" <kidehen@openlinksw.com>
CC: <public-xg-webid@w3.org>
JavaScript-implemented SSL is the tweak. Connectionless SSL messaging over
HTTP POST bearers (not TCP/UDP) is another, avoiding the wars over

Cryptopolitics has put static rules around HTTPS in browsers and servers
that make it hard to evolve toward the agent world we need. JavaScript
breaks us out of the box, long ago locked by the culture of restricting

There should have been a thousand vendors adding custom protocols to SSL by
now (since that was the intent). This would have allowed HTTPS to be the
framework that OpenID re-created (tunneling AX alongside its session-layer
protocol, leveraging the common derived keying, for example).

Another tweak may be to not worry about all the SSL modes: enable SSL over
HTTP POST bearers and focus only on handshaking (vs. data transfer), mainly
for authentication and session management. One leaves app-data transfer
(encryption, etc.) to the browser platform, using the existing mature
sockets, channels, etc.
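The idea above - TLS records carried in HTTP POST bodies rather than over a TCP socket, with only the handshake in scope - can be sketched with Python's `ssl.MemoryBIO`, which decouples the TLS engine from any transport. This is a hypothetical illustration of the bearer-agnostic pattern, not a design from the mail; the hostname is made up.

```python
# Sketch: decoupling TLS from TCP so its records can ride any bearer
# (e.g. the body of an HTTP POST), using Python's in-memory TLS engine.
import ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # demo only: no peer verification

incoming = ssl.MemoryBIO()   # bytes received from the peer go here
outgoing = ssl.MemoryBIO()   # bytes to send to the peer appear here
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="example.org")

try:
    tls.do_handshake()       # cannot finish yet: nothing from the peer
except ssl.SSLWantReadError:
    pass

client_hello = outgoing.read()  # raw TLS handshake record(s)
# A "connectionless" client would now POST client_hello to the server,
# feed the response bytes into `incoming`, call do_handshake() again,
# and repeat until the handshake completes - no TCP/UDP session needed.
print(len(client_hello))
```

Once the handshake completes, application data could be left entirely to the browser platform, exactly as the paragraph above suggests.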

Now we can start to really do secured RESTful services, because the SSL
security layer provides the crypto-session support in a manner that is
tuned for that view of how to orchestrate the web. We are not forced to
keep retrofitting the HTTPS concept, conceived for a web generation now 10
years out of date.

-----Original Message-----
From: public-xg-webid-request@w3.org [mailto:public-xg-webid-request@w3.org]
On Behalf Of Kingsley Idehen
Sent: Tuesday, May 03, 2011 6:09 AM
To: peter williams
Cc: 'Melvin Carvalho'; public-xg-webid@w3.org
Subject: Re: webid trust model, one or multiple?

On 5/2/11 5:34 PM, peter williams wrote:
> So where do we fit into the trust space?
> Let's assume we are trying to fit into both legacy and future - whichever
> is the right balance. Assume we are not ultra-religious, about proper
> Assume utility is most important.
> We know even by openid 2's appearance, that the live journal notion 
> (openid
> 1) had fallen by the wayside. The FOAF basis of openid 1 was even 
> further back in time, and even more irrelevant. The Netscape practice
> from 1996, before the LDAP era at Netscape, of having personal home pages
> on http://netscape.com/~jeffw died a very, very long time ago.
> We know that the core political agreement in openid space - between 
> openid and XRI - had two goals BOTH of which fell by the wayside post
> First, the use of an XRI record as user-centric intermediary between 
> IDP(s) and consumer sites died. Second, also died the use of a 
> hierarchy of signed XRD files for multiple namespaces - which competed 
> with a hierarchy of signed RR files known as DNSSEC (preferred by the US
> govt, and which, to be fair, ties into IPv6 and IPsec). The latter -- in
> the XRI/XRD case -- allowed for poly-archical relationship models,
> competing with triples and RDF and SPARQL (in many ways). The well-known
> war of URI vs. XRI was actually a side-show.
> Anyways, ALL of that stuff above has fallen by the wayside. None of it 
> has relevance. Henry was wrong to state in the paper that openid 
> requires a user to type a URI (whereas webid doesn't). Openid in 
> practice has not required typing URIs in years, and the number of 
> folks using that legacy mode is next to zero. This compares with a 
> billion Google users (using nascar-mode
> openid2.) Microsoft's ACS bridge doesn't even bother enabling legacy
> URI interoperability (so it won't talk to the WordPress IDPs, or at
> least it's hard to configure in the default admin UI).
> Now, when I used to talk to John Bradley, who is a fair-minded
> engineer with solid protocol, identity, and trust [framework]
> ideas, I came away with the impression that he and the crew felt they
> had tried as hard as anyone could to deliver a user-centric world - as did
the XRI folks, in support.
> But, there was just no demand from the public. It was also 5-10 years 
> too early, tech-wise. As we all know, sometimes a tweak makes all the
> difference.

Luck and timing open windows for tweaks.

5-10 years ago there was a clear sense of how data access by reference
could go InterWeb-scale via URIs. Personally, I am not convinced a majority
of folks actually understand the implications of the URI abstraction in
this regard. The misconception of a syntactic lingua franca pegged to a
specific format remains my acid test for the aforementioned confusion.

Expanding what programmers have done for eons on local computers re.
de-reference (indirection) and address-of operators is the key to solving
the problem. Interchange formats are only loosely bound to this system,
with actual representation formats being negotiable. These solid
principles, reflected in the URI abstraction, are the true ingenuity that
underlies the Web. The sooner folks understand this, the better for
everyone. Trouble is, folks have been discouraged from learning about
pointers and linked data structures for many years, especially following
the initial failure of object-oriented technology (languages, databases,
and middleware, e.g. CORBA) before the emergence of today's ubiquitous
InterWeb.
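The pointer analogy above can be made concrete with a toy sketch (all names and URIs hypothetical): "address-of" mints a reference, "de-reference" follows it, and the representation format is negotiated at de-reference time - the reference itself stays format-agnostic.

```python
# Toy illustration of the URI-as-pointer idea: a dict stands in for
# the Web's address space; representations are negotiable at deref time.
import json

store = {}  # URI -> underlying resource

def address_of(uri, resource):
    """Bind a resource to a URI (mint a reference)."""
    store[uri] = resource
    return uri

def dereference(uri, accept="application/json"):
    """Follow the reference; negotiate the representation format."""
    resource = store[uri]
    if accept == "application/json":
        return json.dumps(resource)
    if accept == "text/plain":
        return ", ".join(f"{k}={v}" for k, v in resource.items())
    raise ValueError("unsupported format")

ref = address_of("http://example.org/id/alice", {"name": "Alice"})
print(dereference(ref))                       # JSON representation
print(dereference(ref, accept="text/plain"))  # plain-text representation
```

The same reference yields different representations, which is the loose binding between the identifier system and interchange formats described above.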

> This is why I'm wary of three things we keep alluding to: HTML5 is some
> special kind of winner (to compete with OAuth); DANE/DNSSEC suddenly
> does correctly what signed XRD did not (for SSL server cert
> counter-signing, at least); users with browsers will be controlling
> very trusted agents in the cloud - that then orchestrate multi-agent
> flows (for such as the photo-printing use case); and then, in a
> world of poorly assured endpoints with self-asserted identities, some
> magical reputation service will evolve (Nasdaq), based on web caching
> and triple crawling - doing facebook-style be-friending, be-liking, and
be-dumping when you break a rule.

There needs to be a model (mental and solution implementation) inversion.
It might just be that the Sony security breaches could help people ask the
following questions when interacting with input-capture forms presented by
a Web site:

1. Why should I ever need to type in my Credit Card Number?
2. Why should I ever need to give away my Mobile or Home telephone number?
3. Why should I ever need to give away my email address?
4. Why should I ever need to give away my Social Security Number?
5. Why should I ever need to give away my Passport Number?
6. Why should I ever need to give away my Birthday?
7. Why should I ever need to give away my Home Address?

Security folks (those who truly understand the subject matter of privacy)
have known for a long time that data access by reference is the key here.
It's also why WebID, once properly understood via its Linked Data
(format-agnostic via URIs) foundation, leads to a eureka moment.
Basically, the revelation has more to do with the ability to build and
exploit linked data structures at InterWeb scale via the de-reference
(indirection) and address-of operator patterns.
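Applied to the form questions above, data access by reference means the site receives a URI naming the data, never the value itself, and de-referencing is gated by the owner's policy. A minimal sketch, with a plain allow-list standing in for a WebID-based access rule and every URI and value invented for illustration:

```python
# Hypothetical sketch: a shop stores a reference to the card number,
# not the number; only an authorized WebID can de-reference it.

personal_data = {
    "https://alice.example/profile#card": "4111-1111-1111-1111",
}
acl = {
    "https://alice.example/profile#card": {"https://shop.example/webid"},
}

def submit_form(site):
    # The site receives a reference, not the credit-card number itself.
    return "https://alice.example/profile#card"

def dereference(uri, requester_webid):
    """Resolve the reference only for parties the owner's policy allows."""
    if requester_webid not in acl.get(uri, set()):
        raise PermissionError("not authorized to de-reference " + uri)
    return personal_data[uri]

ref = submit_form("https://shop.example")
print(dereference(ref, "https://shop.example/webid"))  # authorized party
```

A breach of the shop's database then leaks only references, which the owner can revoke by editing the policy, rather than the values themselves.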

> Beyond an authorization logic for attributes and statements, I'm
> looking for our trust claim. As it stands, I see folks mixing topics.
> Just believe, and it will come out in the wash. No, it won't. It just
> makes for crappy crypto that no one with an auditor will adopt.
> My gut tells me that we want a query server (ideally SPARQL) that can
> simply compute trust chains - the sequence of FOAF cards that link webid X
and Y.

Yes, but SPARQL is an implementation detail. You just want a service that
can compute trust chains. Same with FOAF; it's just an implementation
detail.
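The trust-chain service both messages describe can be sketched independently of SPARQL or FOAF: given foaf:knows-style links between WebIDs (here a plain dict; a SPARQL endpoint would be one possible backing store), find the chain of cards linking WebID X to WebID Y. All WebIDs below are made up for illustration.

```python
# Sketch of a trust-chain computation over foaf:knows-style links.
from collections import deque

knows = {
    "https://x.example/#me": ["https://a.example/#me"],
    "https://a.example/#me": ["https://b.example/#me"],
    "https://b.example/#me": ["https://y.example/#me"],
}

def trust_chain(start, goal):
    """Breadth-first search for the shortest chain of WebIDs."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in knows.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain found: no basis for derived trust

chain = trust_chain("https://x.example/#me", "https://y.example/#me")
print(" -> ".join(chain))
```

Note the service only returns the chain; judging whether one chain (closure) is better than another is deliberately left to others, as the thread goes on to argue.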

> That query service will itself be a webid-powered endpoint.


> It holds the
> cache of my reliances on FOAF cards, and it computes my particular
> closure (and yours, and Henry's...).


> Joe's and Kingsley's sparql services are close here. We just need the 
> queries, and the multi-tenancy.

In our case, you just need the service published :-)

> Then we stop. We leave the rest of markets, for now.


>   We let OTHERS add
> criteria and compute metrics that judge whether one closure is better
> than another. One group that can do this is the federated social web
> folks, of course, "adding" value to webid. But so can n other groups
> that focus on other criteria that the social web cares little about.

And it's the only way it can work. Disparity becomes the basis for
additional security via domain-specific rules and policy graphs. The
metrics in the Accounts dept. cannot apply to the Marketing dept., for
instance. What's good for Facebook cannot be good for LinkedIn or the rest
of the world.

The hard part seems to be fundamental acceptance that loose coupling is
good.

> -----Original Message-----
> From: public-xg-webid-request@w3.org
> On Behalf Of Melvin Carvalho
> Sent: Monday, May 02, 2011 11:18 AM
> To: Peter Williams
> Cc: Kingsley Idehen; public-xg-webid@w3.org
> Subject: Re: webid trust model, one or multiple?
> On 2 May 2011 19:10, Peter Williams<home_pw@msn.com>  wrote:
>> But society thinks it has special rights to define the privacy space. The
> US has decided that Google et al. should enforce it (whatever it is) on
> your behalf. Those who consume websso profiles are governed by Google,
> use and reuse.
>> The single most important trust thing we have to address is ensuring that
> multiple IDPs support your access to sites - and they do not know of each
> other's existence.
> Yeah, it's kind of a shame that hosting your own OpenID became less of a
> focus.  In fact I think LiveJournal, who invented OpenID as a system to
> federate ID, probably don't have access to most OpenID relying parties
> these days.
> The idea of giving choice in trust is hopefully similar to what blogs did
> to traditional media: giving more choice to the end user in terms of who
> they want to get their information from.
> I think it's a sign of going mainstream when the President of the United
> States recommends broadening your horizons by putting down a newspaper
> and reading some blogs from time to time.
> It's hopeful that identity, and WebID in particular, can help bring about
> that kind of choice and put users more in control of how they use the Web.
>> This (re) balances the power.
>> This is where openid ultimately failed. Let's see how webid does.
>> On May 2, 2011, at 5:31 AM, Kingsley Idehen<kidehen@openlinksw.com>
> wrote:
>>> On 5/2/11 7:44 AM, Melvin Carvalho wrote:
>>>> On 1 May 2011 20:37, peter williams<home_pw@msn.com>   wrote:
>>>>> Is there one webid trust model, or are there to be multiple -
>>>>> because the IX is about standardizing "a framework" for trust
>>>>> overlays? If it's a framework, I see value in using logical
>>>>> description "enabling" trust metrics, generically. These can drive
>>>>> link-chain discovery, as usual. It's criteria-based search.
>>>>> I'm trying to decide where to spend my time in the next three
>>>>> months. There is no point in me being involved in something I don't
>>>>> believe will ever work (standardizing a single trust metric). I might
>>>>> as well get out of the way, if this is the group's mission.
>>>>> If it helps motivate the decision, a real-world user story of
>>>>> handling macro-trust issues - at national scale - may be applicable.
>>>>> There is just no way I can impose a trust metric on my very local,
>>>>> de-centralized customer base - as they network using the social
>>>>> web. They will quickly slap me down for even trying, let alone
>>>>> agree with any given proposal. They SEEK local variance in trust
>>>>> etc. It's what distinguishes their value, in the subtle "business
>>>>> social networking" scene found in selling real-estate to migratory
>>>>> populations, or as folks change lifestyle with age, income brackets,
> etc.
>>>>> In that scene, one sells trust in "gated communities" to one
>>>>> person, and one sells "iron bars on the windows" to another. Some
>>>>> communities measure trust in the absence of broken cars in the
>>>>> street, or the absence of side-walks on country streets; and the
>>>>> realtor will project that value system. Trust, safety, confidence,
>>>>> and assurance are all variant terms that get bandied around.
>>>>> Other communities have more divisive trust measures, often
>>>>> obliquely stated or enforced. Somehow the independent realtor, as
>>>>> trusted agent, has to mediate even these issues (which obviously
requires a lot of social finesse).
>>>> I think this paper is an excellent model:
>>>> http://www.cdm.lcs.mit.edu/ftp/lmui/computational%20models%20of%20trust%20and%20reputation.pdf
>>>> It basically says there's an element of trust that is subjective and
>>>> an element that is observed
>>>> Trust is per individual and per group
>>>> Observed trust can be direct, through interaction or observation, or
>>>> indirect, through the reputation of an individual or group, either
>>>> prior or propagated.
>>>> It's a relatively complex model, but then trust is a hard thing to
>>>> model and can get very complex.
>>>> One reason I'm excited about WebID is that it's possible (longer
>>>> term) to model complex concepts such as trust as data and ontologies
>>>> develop
>>> When all is said and done, this is basically about the power of data
> access by reference combined with logic-based schemas. As I state
> repeatedly, this is the "holy grail" for those who've grappled with these
> matters long before the emergence of today's ubiquitous InterWeb. It is
> also why the real narrative has to be about how contemporary technologies
> (InterWeb, URIs, and EAV) have emerged to solve some really old headaches.
>>> OpenLink built and sold secure ODBC drivers to corporations (and still
> does) on the back of a trust graph based on data containers (files)
> hosting EAV content using the old INI notation for graphs. It's how we
> were able to sell drivers that handled scenarios like departmental
> partitioning, such that payroll, pensions, 401K data, etc. were protected
> via enterprise-specific rules; i.e., we gave our customers the
> infrastructure to implement their rules etc.
>>> Today, WebID enables us to deliver the same functionality via URIs, EAV
> based Linked Data graphs (RDF and other formats), SPARQL, and user
> trust models (ontologies).
>>> Privacy (where vulnerability is calibrated by the vulnerable) is the
> biggest problem on this planet today, due to InterWeb ubiquity. We now
> have the critical infrastructure in place for addressing this problem
> head-on.
>>> --
>>> Regards,
>>> Kingsley Idehen
>>> President & CEO
>>> OpenLink Software
>>> Web: http://www.openlinksw.com
>>> Weblog: http://www.openlinksw.com/blog/~kidehen
>>> Twitter/Identi.ca: kidehen



Kingsley Idehen	
President & CEO
OpenLink Software
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen
Received on Tuesday, 3 May 2011 16:19:19 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 19:39:44 UTC