
Re: describing, crawling, https schemes, https and session guards, OAUTH

From: Kingsley Idehen <kidehen@openlinksw.com>
Date: Fri, 06 Jan 2012 13:50:30 -0500
Message-ID: <4F074276.9020302@openlinksw.com>
To: public-xg-webid@w3.org
On 1/6/12 12:43 PM, Peter Williams wrote:
> You are the only person who has ever delivered the semantic web to me 
> in a tangible form (that then largely does exactly what I was sold, 
> based on a standard browser, no plugins).
>
> I built up my test case with https/http interworking with your concept 
> in mind. How can I impose security guards and also let the semantic 
> web do its thing? That is my quandary. I don't have the terms to say this, 
> so I'll muddle through. In each world (comsec/SSL, semantic web) it's easy, 
> but in the combined world we are missing a lexicon.
>
> The world of Henry's efforts (where the address bar says http, but the 
> resource is accessed using https over hidden ajax) only makes things 
> harder for the semantic web itself, as now discussed.
>
> For example, my little site allows a user to present session cookies 
> to https-guarded resources (marked for IDP guarding, not merely marked 
> for https). Suppose one has cooked claims from some IDP, but a guard rule 
> prevents one from translating the cookied claims into an actual 
> session that results in one accessing the resource (because the http 
> context, say, prohibits the guarded resource from relying on the cooked 
> claims). One can always re-present the user's cooked claims to a now 
> properly https-contextualized resource. In so doing, if the user did 
> not appear to have "login status" before, they will now (without a 
> round trip to the IDP). The cooked claims were latent in the protocol, 
> that is, but were excluded by guarding until the security 
> guard's pre-conditions were satisfied. Once satisfied, they work 
> statelessly, as if the pre-conditions had been met all along.
>
> Now Google and Microsoft solved all this, even for resources that 
> expose a RESTful webAPI (with content-negotiation built in), using the 
> SWT and now JSON-encoded forms of cooked claims. But this doesn't help 
> me. The uriburner crawler is not armed with SAML, which would enable it 
> to get the SWT token from an STS that then drives the webAPI guarding the 
> resource, RESTfully, and for all variants of content-negotiation. It 
> does have OAUTH, done the webid-integrated way of course.
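[For readers unfamiliar with SWT: a Simple Web Token is a form-encoded set of key=value claims with a trailing HMACSHA256 signature over everything before it. A minimal sign/verify sketch, with a made-up shared key and claims:]

```python
# SWT sketch: form-encoded claims, signed with an HMAC key shared with
# the STS. Key and claim values are invented for illustration.
import base64, hashlib, hmac
from urllib.parse import parse_qsl, quote

key = b"shared-secret-with-the-sts"  # hypothetical STS signing key

def sign_swt(claims: str) -> str:
    sig = base64.b64encode(
        hmac.new(key, claims.encode(), hashlib.sha256).digest())
    return claims + "&HMACSHA256=" + quote(sig.decode())

def verify_swt(token: str) -> dict:
    body, _, _ = token.rpartition("&HMACSHA256=")
    if not hmac.compare_digest(token, sign_swt(body)):
        raise ValueError("bad SWT signature")
    return dict(parse_qsl(body))

token = sign_swt("Issuer=https://sts.example/&Audience=https://api.example/")
```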
>
> Ignoring the webAPI issue, I want to focus on the semantic web (not 
> REST), with a bookmarklet that "extends" the browser, quite naturally. 
> The user on said https (& guarded) resource has cooked claims and sees 
> resources in HTML form. On invoking a bookmarklet'ed handoff to such 
> as the uriburner describing service, the data space crawler does not 
> hold such cooked claims. The bookmarklet can induce uriburner to crawl (and 
> hopefully present its stored description to humans and machines alike), 
> but it will fail to pass the guard on first attempting to retrieve the 
> GUARDED https-schemed resource. It cannot describe (and deliver to 
> others from its endpoints) what I refused to share, because of the 
> guard policy.
>
> Of course, this is where OAUTH comes into play, specifically for the 
> semantic web (not for printer user stories). We want the semantic web 
> to do its thing, even for guarded resources.
> I need the next-generation uriburner bookmarklet to also induce 
> claims-delegation to occur, from me (the browser user) to uriburner (a 
> data space crawler), authorizing the crawler to pass the guard (and 
> THEN handle the result set suitably, in a partitioned data space only 
> available to those on my (the delegating party's) follower list).
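[The delegation flow asked for above can be modeled in a few lines. This is a sketch under assumptions, not uriburner's implementation; in a real crawler the token would travel in an `Authorization: Bearer` header over https, and all names below are hypothetical.]

```python
# Sketch: the user delegates a scoped token to the crawler; the crawler
# passes the guard with it and files the resulting description into a
# partition readable only by the delegator's followers.
def crawl_guarded(resource_url, bearer_token, followers):
    # Model of the guard check; a real crawler would do an HTTPS GET
    # with "Authorization: Bearer <token>".
    if not bearer_token:
        return {"status": 401, "description": None, "acl": set()}
    description = {"about": resource_url, "triples": []}
    # Partition the stored description per the delegator's follower list.
    return {"status": 200, "description": description,
            "acl": set(followers)}

result = crawl_guarded("https://example.net/guarded",
                       "delegated-oauth-token", ["alice", "bob"])
```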

Yes, we have some of this in place, but now's the time for us to revisit 
this matter.


>
> I'm now going to build up the Azure OAUTH support. I will need help 
> later on today with the webid/OAUTH modes of ODS, so something 
> actually inter-works.
>
> Having enabled the crawler to at least read the resources (using 
> OAUTH-delegated claims), it must know to auto-describe the 
> authorization policy it finds within the resource, AND then enforce it 
> on the meta-resources it opts to publish to others.
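[The downstream-enforcement requirement has a simple shape: whatever the crawler publishes may be no more visible than the source it described. A sketch, with invented names:]

```python
# Sketch: derive the ACL for a published meta-resource by intersecting
# the authorization policy found on the source resource with the
# delegator's follower list, so visibility can only narrow, never widen.
def derive_meta_acl(source_acl, delegator_followers):
    return set(source_acl) & set(delegator_followers)

acl = derive_meta_acl({"alice", "bob", "carol"}, {"bob", "dave"})
```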
>
> Lets see what works, when webid and authorization meet.
>
> I'm sure you have done this already. I'm just catching up, to see what 
> works, and what works with commodity tools.

We support OpenID, OAuth, and WebID on the client and server sides. 
Anyway, let's see what comes out of this new round of QA tests / 
experiments re. protocol implementation and interop :-)


If the plumbing hits a breakage point, we'll fix it. All of these 
protocols have been implemented in the Virtuoso kernel for a while now.


-- 

Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen

Received on Friday, 6 January 2012 18:50:54 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Friday, 6 January 2012 18:50:56 GMT