W3C home > Mailing lists > Public > public-xg-webid@w3.org > December 2011

social recasting and webid - an aside on RE: Mandatory client supported serializations

From: Peter Williams <home_pw@msn.com>
Date: Sat, 31 Dec 2011 10:03:47 -0800
Message-ID: <SNT143-W79BB850D6DAC066CA19CF92930@phx.gbl>
To: <kidehen@openlinksw.com>, "public-xg-webid@w3.org" <public-xg-webid@w3.org>

So let's address this subtopic, as it concerns W3C and this incubator (and its/our role in identity and trust, of which webid is but one strand of W3C and other IETF activity). Does realty fit into W3C's vision of the web, and therefore webid (since realty is a pretty legacy user of IT, and pre-IT, and pre-pre-IT, and the world before IT, and the world before books...)? Let's tie it to equivalency, and to things I can produce code for and demonstrate actual interworking with.

 

We know OASIS split from W3C, as W3C culture did not fit with the business models of certain players. W3C was (is?) pseudo-anti-capitalist. Or, in the usual duality, it's fully capitalist, and is perfectly happy to re-engineer legacy businesses and social practice (e.g. realty). Hallam-Baker was perfectly clear to me about the initial agenda of the 3 web founders. He used long words, but they made sense, and had impact. They were about social recasting, and reminded me of German/Russian thinking from the 1880s (and especially 1930s-fixated Russia, or late-1970s-fixated Cambodia), but nonetheless worked on the web, at scale, in the late 1980s-90s. Henry reminds me of that thinking set, when his normally controlled use of language and tone lapses. The agenda is to recast the basis of trust making (to fit some post-party-politics world).

 

Now, in realty, I have no (self-)interest in W3C doing any social re-casting via webid, wholly eliminating CAs as a concept, or any other similar social vehicle. I need publicly accountable TTPs (using any technology you like, we could not care less: it's all bits). We are 1 million TTPs, running 20-30 person trust networks, each one operating over a physical distance of about a 10 mile radius. Somehow, a trillion dollars of cash moves around annually in that framework, and 2 million kids get raised on the profits. A few folks buy Learjets, but not many. Much greater than 50% of the corporate body are female, and over 50. When I was 7 and playing with video game computers, they patted me on my head, and said "good boy, now go brush your teeth, Timmy." I need normal human social organization to work, and to project, and for governments to do their thing (without becoming invasive or pre/pro-scriptive about everything). The latter all suppresses local trust formation (and thus the conveyancing of half-million dollar properties). Everyone is looking over their shoulder, at the snitch milking some trivium of incorrectness.

 

 

Will webid make it, and have a story to tell about working with TTPs (in some re-modeled fashion)? Will the trusted data space change the social structure, collapsing (yet more of) a trillion dollar cash-flow market? Is W3C a systemic threat, or a saviour?

 

So far it does a bad job. Until the last week, webid was an appalling waste of time: using really hard and immature technology to do what OCSP on cert-using server systems already does every day, worldwide, in a billion systems, during testing of signatures on Windows updates and drivers. It was total English bullshit, at its best, delivered with full honors.

 

This week, webid *started* to tell a more coherent story, one that made the social re-casting more palatable. I suspect (and believe) folks have had this latent, all along. It started to show that the semantic web (a) works with the web that is, (b) adds something only recently realizable at scale from the world of pure logical identity theorem proving, and (c) the rigor of the URI identity model could showcase some actual benefit (when linking up profiles, with different TTPs doing the certification of keying in each, creating value chains that can underpin trillion dollar size open market economies).

 

----

 

 

I'm going to edit my little validator agent experiment today, to make further use of the linkburner SPARQL-protocol endpoint, doing remote query execution in the course of enforcing the SSL handshake protocol. I assume it's OWL-enabled, with a reasoner. I will need help formulating working queries that exercise some query logic working over 2 profiles, named using 2 SAN URIs from 1 cert, with 1 key.
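To make the ask concrete, here is a minimal sketch of the kind of query I have in mind, built as a plain string. The cert vocabulary terms (cert:key, cert:modulus, cert:exponent) follow the WebID drafts' cert ontology; the two profile URIs below are hypothetical placeholders standing in for the 2 SAN URIs, and the query only checks that both profiles publish the same RSA key material:

```python
# Sketch: build a SPARQL ASK query that is true iff two WebID profile URIs
# each publish a cert:key with identical modulus and exponent.
# The example.org/example.net URIs are invented placeholders.

def same_key_ask_query(webid_a: str, webid_b: str) -> str:
    """Return a SPARQL ASK query testing that both WebIDs share one RSA key."""
    return f"""
PREFIX cert: <http://www.w3.org/ns/auth/cert#>
ASK {{
  <{webid_a}> cert:key [ cert:modulus ?m ; cert:exponent ?e ] .
  <{webid_b}> cert:key [ cert:modulus ?m ; cert:exponent ?e ] .
}}
"""

q = same_key_ask_query("https://example.org/alice#me",
                       "https://example.net/alice#me")
print(q)
```

A validator agent would POST this to the SPARQL-protocol endpoint after the handshake, and accept the client only if the ASK comes back true.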

 

Your linkburner SPARQL server has an option to auto-pull owl:sameAs data from the triples *received* from the securely-named endpoint, when it's "useful" to do so, in the course of preparing data for local querying. I want to see the utility of what we have been discussing. I want (as the world's worst programmer) to be able to do it, so we know it's real for all the class (not just those who get As). Then the rest of the world can copy it. Using a remote SPARQL protocol server for this seems ideal, since it's now acting as a trust resolver (leveraging equivalence relations). Using a SPARQL server guarded by webid validation is even better, as a "system" of trusted agents starts to come into being.
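For illustration, here is a minimal sketch of the sameAs "smushing" step such an option would perform, in plain Python. The union-find approach and the toy URIs are my own assumptions, not linkburner's actual implementation:

```python
# Sketch: given pulled triples, compute the owl:sameAs equivalence classes
# (union-find) and rewrite subjects/objects to one canonical member, so a
# single local query sees both profiles as one node. The ex: names are toys.

SAME_AS = "http://www.w3.org/2002/07/owl#sameAs"

def smush(triples):
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    # first pass: record every sameAs link
    for s, p, o in triples:
        if p == SAME_AS:
            union(s, o)
    # second pass: canonicalize the remaining triples
    return {(find(s), p, find(o)) for s, p, o in triples if p != SAME_AS}

triples = [
    ("ex:a#me", SAME_AS, "ex:b#me"),
    ("ex:a#me", "foaf:name", '"Peter"'),
    ("ex:b#me", "cert:key", "ex:key1"),
]
merged = smush(triples)
print(merged)
```

After smushing, the name and the key both hang off one canonical node, so a query over "the person" sees data drawn from both profiles.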

 

I want to be able to test for membership of two profiles in a given equivalence class. The class itself will be distinguished using a data URI with certain integrity properties that detect tampering, and webid validation agents (and the SSL handshake) will prove that the profiles do participate in that particular equivalence class. Since the class apparatus is based on the canonical nature of the public key in the data URI, it obviously scales to any size of equivalence class we want, to suit the million-person world of (wholly de-centralized) realty trust networks based on the individual (and her 20 local friends, neighbors, and business associates).
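As a sketch of one way such a data URI could be constructed (my own construction, not a spec): hash the canonical public key bytes and embed the digest in the data: URI, so any tampering with the class name is detectable by re-hashing the key presented in the SSL handshake. The media type and the demo key bytes below are invented:

```python
# Sketch: name an equivalence class with a data: URI derived from the
# canonical public key. Integrity check = recompute the name from the key
# actually presented in the handshake and compare.

import base64
import hashlib

def class_uri_for_key(key_bytes: bytes) -> str:
    """Derive a tamper-evident class name from the canonical key bytes."""
    digest = hashlib.sha256(key_bytes).digest()
    return ("data:application/x-webid-class;base64," +
            base64.urlsafe_b64encode(digest).decode("ascii"))

def key_matches_class(key_bytes: bytes, class_uri: str) -> bool:
    # membership/integrity test: the name must equal the recomputed name
    return class_uri_for_key(key_bytes) == class_uri

key = b"demo public key bytes"
uri = class_uri_for_key(key)
print(uri)
print(key_matches_class(key, uri))          # True
print(key_matches_class(b"tampered", uri))  # False
```

Any number of profiles citing the same key then fall into the same class, which is what lets the scheme scale to a million independent 20-30 person trust networks without a central registry.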

 

Received on Saturday, 31 December 2011 18:04:23 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Saturday, 31 December 2011 18:04:23 GMT