W3C home > Mailing lists > Public > public-xg-webid@w3.org > January 2012

Re: describing, crawling, https schemes, https and session guards, OAUTH

From: Kingsley Idehen <kidehen@openlinksw.com>
Date: Sat, 07 Jan 2012 13:42:32 -0500
Message-ID: <4F089218.6070904@openlinksw.com>
To: public-xg-webid@w3.org
On 1/7/12 10:28 AM, Peter Williams wrote:
> The OAUTH approach is one way, and will depend on the maturity of your 
> own protocol engine and that in Microsoft Azure STS (which gives me 
> OAUTH, now). I will play with this, as I want to qualify the maturity 
> of each. So far, I've stayed away from OAUTH, not wanting to be in the 
> first wave of attacks.

Okay, just get going. Our implementation is pretty mature.

See how OAuth works with a Virtuoso SPARQL endpoint at: 
http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/VirtOAuthSPARQL .
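As a rough illustration of what calling an OAuth-protected SPARQL endpoint involves, here is a minimal sketch of the generic OAuth 1.0a HMAC-SHA1 request-signing step. The endpoint URL, keys, nonce, and timestamp below are hypothetical placeholders, and this shows the general OAuth 1.0a scheme rather than Virtuoso's specific implementation:

```python
import base64
import hashlib
import hmac
import urllib.parse

def pct(s: str) -> str:
    # RFC 5849 percent-encoding: only ALPHA / DIGIT / "-" / "." / "_" / "~" survive
    return urllib.parse.quote(s, safe="~-._")

def oauth1_signature(method, url, params, consumer_secret, token_secret=""):
    """Compute an OAuth 1.0a HMAC-SHA1 signature for one request.

    Simplified sketch: assumes unique parameter names (no duplicates).
    """
    base_params = "&".join(
        f"{pct(k)}={pct(v)}" for k, v in sorted(params.items())
    )
    base_string = "&".join([method.upper(), pct(url), pct(base_params)])
    key = f"{pct(consumer_secret)}&{pct(token_secret)}".encode()
    digest = hmac.new(key, base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical endpoint, credentials, nonce, and timestamp, for illustration only.
params = {
    "query": "SELECT * WHERE { ?s ?p ?o } LIMIT 10",
    "oauth_consumer_key": "example-key",
    "oauth_nonce": "abc123",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": "1325959352",
    "oauth_token": "example-token",
    "oauth_version": "1.0",
}
sig = oauth1_signature(
    "GET", "https://example.org/sparql", params,
    consumer_secret="consumer-secret", token_secret="token-secret",
)
params["oauth_signature"] = sig  # append to the query string (or auth header) when sending
```

The point of the signature is that the endpoint can verify who issued the query without a password crossing the wire, which is what makes per-client SPARQL access policies enforceable.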

Additional responses follow below...

>
> How do I take cert:key and make a business out of it? That is the next 
> issue, for there starts the money flow (which pumps the semantic web 
> beyond its R&D stage).
>
> But the point is that I'm turning the whole "about" (SPARQL DESCRIBE) 
> concept of the semantic web to suit an enterprise-centered security 
> policy. (This is one Henry doesn't grok, at all; just not wanting such 
> a world to exist, I suspect.) I'm only interested in the semantic web 
> AS IT'S DIFFERENT in a paradigmatic sense, not (yet) because it's a 
> different bit of middleware for app building. First, it has to show 
> that it presents a new plumber's paradise. It has to have invented 
> stainless steel plating, so the world is no longer populated by rusty 
> old iron faucets. It doesn't need to reinvent the faucet itself, 
> though. I think you did that, because of the consistency of the 
> meta-ness. You didn't get side-tracked into web APIs with REST, or 
> changing how trust should happen in the world, or any of the other 
> dogmas (those come later, in my world).
>
> Another option is to use VPNs. Azure comes with "cloud-easy" VPN 
> technology for middleware builders. It's NOT the old Cisco layer-2 
> dialup concept. It's about hooking a cloud presence to the enterprise 
> presence, so servers can be in each location on the same subnet. Since 
> it's targeting Windows programmers (like me, not the most intelligent 
> members of the species), it has to "just work" at a controlled cost 
> (and typically be cheaper than a $100k US person doing Linux), and 
> work with legacy apparatus of several generations. Specifically, it 
> allows cloud-hosted instances in that highly controlled space to talk 
> to the enterprise directory and enterprise-hosted SQL servers. This 
> allows me to have my cert minting server in the cloud, for example, 
> but talk to N "domains" (actually, unlinked forests), each one minting 
> certs using said central minting server via enterprise group policy, 
> and auto-populating the certs (with SANs) ready for use, as 
> controlled by the domain manager (not the central minting service). I 
> don't need VeriSign to do this kind of scaling for me; I can do it 
> myself, now, courtesy of (advanced deployment modes of) Windows and 
> Windows Azure.
>
> I could also be dropping a (simple) endpoint into your server farm 
> (it's an install that fashions a virtual network adapter), providing 
> thereby a route to that which is not present on internet-visible 
> endpoints. It could thus give an AUTHORIZED crawler a private 
> endpoint, over which it then crawls such authorities. The VPN 
> method would be doing the security enforcement rather than OAUTH, 
> based on standard endpoint control.
>
> Let's call that what it is: a good ol' bilateral agreement for 
> information sharing, enforced using endpoint visibility. But the 
> point is that the cert:key triples that then get published on the web 
> (by the likes of linkeddata.uriburner.com) would only be made 
> available (in their proxy-URI named form, and on linked data 
> endpoints served by the OpenLink cloud) to those the original document 
> declares. I.e., the linked data service MUST enforce that, for the 
> triples whose subject is the proxy URI.
>
> Now, doing this, we start to have commercial security that does not 
> tamper with the semantic web concept, and still exploits its paradigm 
> shift. We start to have the conditions necessary for decent uses of 
> cert:keys, and have moved beyond a wikileaks fuck-the-world type 
> attitude to cert:keys. While that's fine and useful as a bootstrap, it 
> has to move on and address more conservative business needs, to keep 
> the ball rolling. Otherwise the giant boulder will come to a(nother) 
> rest (sic).
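The enforcement rule described above, i.e. only exposing triples whose subject is a proxy URI to the parties the original document declares, can be sketched as follows. The URIs, the in-memory triple list, and the ACL structure are entirely hypothetical; a real linked data server would evaluate this against its store and the requester's authenticated WebID:

```python
# cert#key predicate from the W3C cert ontology
CERT_KEY = "http://www.w3.org/ns/auth/cert#key"

# (subject, predicate, object) triples; the first subject is a proxy URI
triples = [
    ("http://linkeddata.uriburner.com/about/id/entity/http/example.org/alice#me",
     CERT_KEY, "d5a1...modulus...9f"),
    ("http://example.org/public#doc",
     "http://purl.org/dc/terms/title", "Public doc"),
]

# ACL the original document is assumed to declare:
# proxy-URI subject -> set of WebIDs allowed to read triples about it
acl = {
    "http://linkeddata.uriburner.com/about/id/entity/http/example.org/alice#me":
        {"https://example.org/bob#me"},
}

def visible_triples(requester_webid):
    """Return only the triples the requesting WebID is allowed to see.

    Subjects with no ACL entry are public; subjects with an entry are
    served only to the WebIDs that entry lists.
    """
    out = []
    for s, p, o in triples:
        allowed = acl.get(s)
        if allowed is None or requester_webid in allowed:
            out.append((s, p, o))
    return out
```

The design point is that the filter keys on the *subject* of each triple, matching the requirement that access control travel with the proxy URI rather than with the document as a whole.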

Yes, you can look at this as Linked Data bringing the power of the 
Semantic Web technology stack to policy-based access for Extranets :-)

Some history: we've always used policy-based data access (a plumbing 
exploitation pattern) to differentiate our middleware products (from 
ODBC, JDBC, OLE-DB, and ADO.NET all the way to Linked Data). This is how 
we answer (to this very day) the question: why would someone pay for 
your ODBC, JDBC, etc. drivers when they can get them for free from a 
DBMS vendor?

The same questions will be asked of WebID. And it will boil down to "it 
just works" vs $100K+ support and consulting gigs.

-- 

Regards,

Kingsley Idehen	
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen


Received on Saturday, 7 January 2012 18:45:30 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Saturday, 7 January 2012 18:45:31 GMT