W3C home > Mailing lists > Public > public-xg-webid@w3.org > January 2012

RE: social recasting and webid - an aside on RE: Mandatory client supported serializations

From: Peter Williams <home_pw@msn.com>
Date: Sun, 1 Jan 2012 21:32:01 -0800
Message-ID: <SNT143-W15338262D682EAFF33315A92910@phx.gbl>
To: <kidehen@openlinksw.com>, "public-xg-webid@w3.org" <public-xg-webid@w3.org>

In 20 days or so, I won't be here (giving Henry what he wants). I'm not renewing my W3C technical expert status through its bureaucratic process, so it will simply lapse. I don't have to be thrown out; it will just happen.

Now, in that 20 days, we get to set some scope. I'm pretty happy that, with a month's perseverance, it was possible to mostly deliver WebID using really basic components. Having proved that, one can then imagine a world where professional engineering takes over (and some real cash drives several rounds of productization that tune, and tune, and focus, and focus). This is where I stop playing programmer and start being a CISO again, picking investments and strategic winners.

When I joined in with FOAF+SSL 2-3 years ago, it was on the observation that, of ALL the various technology on display, the SPARQL server seemed the MOST obvious, low-hanging, highest-quality thing the movement had produced. But every attempt to make use of it walked into a dogma wall: why not do something else instead (edit a blog post, or a file on an Apache server, or talk about session REST APIs)? It's nice to see, once one has done the blog-post thing, that one does ultimately come back to the "managed" service for the more advanced use cases, which at its core is your SPARQL cluster.
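(For readers outside the group: the SPARQL connection to WebID is concrete. In the WebID drafts of this era, a verifier dereferences the client's claimed WebID URI, loads the profile as a graph, and checks that the public key from the presented certificate appears in it; the spec illustrated this check as a SPARQL ASK. A minimal sketch, with the WebID URI and key values stand-ins for whatever came out of the actual certificate:)

```sparql
# Does bob's profile graph list the cert's RSA key? (cert ontology per
# the WebID editor's drafts; the URI and the bound values ?mod / ?exp
# are illustrative placeholders taken from the client certificate.)
PREFIX cert: <http://www.w3.org/ns/auth/cert#>
ASK {
  <https://bob.example/profile#me> cert:key [
      a cert:RSAPublicKey ;
      cert:modulus  ?mod ;
      cert:exponent ?exp
  ] .
}
```

If the ASK returns true, the client has proven control of a key the profile vouches for, and is logged in without any password.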

Now, folks WOULD discuss that one COULD build agent-agent protocols out of SPARQL. And, as a generic agent hosting all manner of such agent-agent protocols, the SPARQL server could implement each agent protocol X as CONSTRUCT results got sent between agent-X instances. This is what drew me to the semantic web in the first place, as it seemed the kind of thing that DARPA WOULD conceive and promote (knowing how DARPA thinks about strategic technology initiatives).
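(To make the CONSTRUCT idea concrete: an agent instance can query its own state graph and emit a new graph, which is the "message" shipped to the peer instance. A sketch only; the `ex:` vocabulary and terms here are invented for illustration, not part of any actual agent protocol:)

```sparql
# Hypothetical agent protocol: rewrite local pending-task state into a
# message graph for the peer. The ex: vocabulary is illustrative only.
PREFIX ex: <http://example.org/agent-protocol#>

CONSTRUCT {
  ?task ex:status     ?status ;
        ex:assignedTo ?peer .
}
WHERE {
  ?task a ex:Task ;
        ex:status     ?status ;
        ex:assignedTo ?peer .
  FILTER (?status = ex:Pending)
}
```

The resulting triples are themselves RDF, so the receiving agent can load them straight into its own store and answer with another CONSTRUCT, which is the whole trick.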

This all seemed VERY powerful, and doable, with the web taking on some of the role that formerly would be given over to the IETF doing "internet" infrastructure. All it required was to complete the product: enabling the generic agents X, Y and Z to have multiple identities with distinct keying (one per agent type, much like a Cisco router is 99 agents with 99 sets of keys and identities, for its roles as an EIGRP router, a VPN concentrator, a bridge-loop detector, an ARP resolver, a DHCP pool agent, a SIP UA, a frame-relay DLCI mapper, a VLAN trunker, and ...(92 more features)...). Then, should a browser with a pipe to another browser (e.g. a websocket) start sending real-time X over said pipe, the "setup" protocols might go over the SPARQL-server "internetwork", enforcing things like access controls, address hiding, encryption tunnels, language translation, admission controls, and audit records. The browsers might be talking to each other over a websocket pipe, but the relays in the middle facilitate, and enforce politics of all kinds. Semantic web for the enterprise (where the big money is).
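(The "multiple identities with distinct keying" part falls out of the profile model naturally: one document can mint one WebID per agent role, each vouching for its own key. A Turtle sketch, assuming the cert ontology of the WebID drafts; every URI and key value below is a made-up placeholder:)

```turtle
# Hypothetical agent document: one box, two roles, two identities,
# each with its own public key. All URIs and moduli are illustrative.
@prefix cert: <http://www.w3.org/ns/auth/cert#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

<https://agent.example/ids#router> cert:key [
    a cert:RSAPublicKey ;
    cert:modulus  "00cafe..."^^xsd:hexBinary ;
    cert:exponent 65537
] .

<https://agent.example/ids#vpn> cert:key [
    a cert:RSAPublicKey ;
    cert:modulus  "00beef..."^^xsd:hexBinary ;
    cert:exponent 65537
] .
```

A relay authenticating the box as `#router` versus `#vpn` can then apply different access controls to each role, even though both terminate on the same machine.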

I think, with last week's work on multiple profiles, WebID was able to cast itself beyond client-server protocols - the world of browser users logging on to websites - with the semantic web trying (yet again) to compete against such WebSSO, or OpenID, and failing (inevitably). There is a chance now that the semantic web can just be itself, having distinguished its role in the next-generation web. It doesn't need to be that thing expecting to be called in from the cold once folks hit some wall (that they never hit), with folks disappointed every time it's overlooked.

While I agree with Henry's position that the WebID spec should be about some simple, tangible, and current topic (site logon, and WebSSO), it's a shame if the incubator does not, in its final report, capture the wider mission. I would want to read that everything deployed for a simple but REALLY characteristic problem (logging in, with a decentralized identity-management system) is REALLY done as a proving ground, and that the same principles will then do X, Y and Z (which are actually much more important, systemically, for the future web). I want to see the case made that this particular study is emblematic of a whole class of related control problems that now get solved (once some focus is applied).

Received on Monday, 2 January 2012 05:32:38 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Monday, 2 January 2012 05:32:39 GMT