
RE: Important Question re. WebID Verifiers & Linked Data

From: Peter Williams <home_pw@msn.com>
Date: Wed, 21 Dec 2011 10:34:37 -0800
Message-ID: <SNT143-W24B52ECF105C2898083EF492A50@phx.gbl>
To: <kidehen@openlinksw.com>, "public-xg-webid@w3.org" <public-xg-webid@w3.org>


Peter did reintroduce it here (before Henry worked to put it in the "deny it's an issue till it's proven a problem" bucket). But I only drew on an analogy initially, not hard reasoning. What had I done in my code in similar circumstances when pulling an XRD file in three cases (XRIs, http URIs, and the "HXRN" URI/XRI hybrid of the https scheme)? I then asked myself: should I do the SAME in the WebID validator? Are the problem and issue sets the same, or related?

If you look at my code (the implementation that doesn't exist (false), and doesn't work for 100% of relevant cases (true)), I took the easy way out. Given a SAN URI from a trusted cert (only), I web-get the resource using the DEFAULT GET semantics of the Windows/.NET library (as this impacts caching etc.). If the resource replies 200 (and nothing else), I allow the SAN URI onto the list of possible WebID URIs; otherwise it's ignored. The filter I wrote relies on proxies, ISPs, governments and the web generally not to lie about 200s (OK, that was a stupid security assumption, but there we are...). I then ask a data-morphing proxy service to convert remote document formats into a format my reader class can handle (RDF/XML). Many threats exist in that handoff, since the proxy and morpher can lie; but that's another stupid security assumption, and there we are again. I'm trying to be in web land (not military land) and find the "right" balance.

Now, the XDI-equivalent answer that Kingsley gives IS THE RIGHT answer. But note its implication: it necessarily requires that we use OWL in the validation process.
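The filter described above can be sketched roughly as follows. This is a hypothetical Python rendering, not the actual Windows/.NET implementation; the function name and the injected `fetch` callable are my own illustration, standing in for whatever HTTP client the validator actually uses:

```python
# Sketch (assumed, not Peter's real code) of the candidate filter he
# describes: a SAN URI becomes a possible WebID only when a plain GET
# of it returns HTTP 200, taking the network's word for the status.

def filter_san_uris(san_uris, fetch):
    """Return the SAN URIs whose dereference yields HTTP 200.

    `fetch` is any callable mapping a URI to an HTTP status code; in
    production it would wrap the platform HTTP client with its default
    (cache-aware) GET semantics.
    """
    candidates = []
    for uri in san_uris:
        try:
            status = fetch(uri)
        except OSError:
            continue  # unreachable resources are silently ignored
        if status == 200:  # 200 "and nothing else", per the description
            candidates.append(uri)
    return candidates
```

Injecting `fetch` keeps the trust decision visible: whatever the network (or a lying proxy) reports as the status code is taken at face value, which is exactly the "stupid security assumption" acknowledged above.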
And that's why I joined this incubator - not to drivel on about low-level stuff I heard 3 years ago. I want to know, since W3C dissed OASIS on XRD, what the illustrious web architects have as the equivalent. I know it's there, and I trust this lot on that topic. But I want to see it on paper (so I can code it).

> Date: Wed, 21 Dec 2011 13:08:34 -0500
> From: kidehen@openlinksw.com
> To: public-xg-webid@w3.org
> Subject: Re: Important Question re. WebID Verifiers & Linked Data
> 
> On 12/21/11 12:55 PM, Mo McRoberts wrote:
> > On 21 Dec 2011, at 17:47, Kingsley Idehen wrote:
> >
> >> I used to think so until Henry expressed questionable suggestions about URI handling that breaks the abstraction re. WebID verifiers.
> > I think it was actually Peter initially, but I could be wrong; Henry just revisited the issue, and took a safe (from a security perspective, if broken from a web arch angle) default position.
> >
> > I’m not sure why that prompted this whole thread. Just saying “redirection (and indirection!) are a fundamental part of web architecture, we just need to settle on how they’re handled from a security perspective” would’ve been a perfectly decent answer to Henry’s question…
> >
> > M.
> >
> 
> Here is how I would frame a security problem (something I've done in the 
> past).
> 
> An owl:sameAs relation exists in a graph somewhere along the 
> de-reference trails. A verifier follows the link and finds a match. Or 
> said verifier applies inference, makes a union, and then gets a match. 
> In either case, one deftly placed relation has tipped the apple cart.
> 
> Solution: implementers of WebID verifiers have to factor in crawl depths 
> and relation semantics. Suggestions could go as far as seeking signed 
> claims for specific relations. BTW -- this doesn't have to be part of 
> the WebID spec, it's just a note for engineers.
> 
> The ultimate challenge for WebID is this: you are going to have 
> variation re. product quality. That's fine, a spec can't control actual 
> engineering, it can only provide the specs for the act of engineering.
> 
> The Internet was broken security wise before the WWW came along. WebID 
> has a great shot of fixing this problem, but it really has to understand 
> and honor the age-old practice known as separation of powers.
> 
> The WebID spec shouldn't be about encouraging implementations that are 
> fundamentally technology Camels -- the usual product of attempting 
> innovation by committee. A spec must sit distinct from implementation 
> engineering.
> 
> -- 
> 
> Regards,
> 
> Kingsley Idehen	
> Founder & CEO
> OpenLink Software
> Company Web: http://www.openlinksw.com
> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
> Twitter/Identi.ca handle: @kidehen
> Google+ Profile: https://plus.google.com/112399767740508618350/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
> 
Received on Wednesday, 21 December 2011 18:35:15 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Wednesday, 21 December 2011 18:35:15 GMT