
Re: What is a WebID?

From: Kingsley Idehen <kidehen@openlinksw.com>
Date: Mon, 05 Nov 2012 22:03:15 -0500
Message-ID: <50987DF3.2020706@openlinksw.com>
To: public-xg-webid@w3.org
On 11/5/12 6:35 PM, Nathan wrote:
> Kingsley Idehen wrote:
>> On 11/4/12 1:18 PM, Melvin Carvalho wrote:
>>> Our solutions are interoperable. Universal does not mean unique!
>>
>> Wrong again.
>>
>> The solutions in question (re. WebID) are no longer interoperable. A 
>> verifier will fault on a hashless URI. It will fault if a profile 
>> document doesn't consist of Turtle content. It will also fault on a 
>> non-http: scheme URI. Do you seriously regard that as interoperable?
>
> This is interesting.
>
> I viewed the constraints as setting a minimum bar for interoperability.
>
> Let's say HTTP + Turtle + Hash URI was level 1.0 support.

But I wasn't responding to constraints. My concern (hopefully allayed 
since Henry's post earlier today) had everything to do with a 
definition that would goad verifiers into coding solely for hash URIs 
that resolve to Turtle content.
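
For what it's worth, here is a minimal sketch of the kind of narrow check I 
was afraid such a definition would encourage. It is purely illustrative: the 
function name and the exact checks are mine, not taken from any spec text or 
any existing verifier.

    from urllib.parse import urlparse

    def narrow_verifier_accepts(webid_uri: str, content_type: str) -> bool:
        """Accept only http(s) hash URIs whose profile is served as Turtle."""
        parsed = urlparse(webid_uri)
        if parsed.scheme not in ("http", "https"):
            return False  # faults on non-http(s) schemes (acct:, ftp:, ...)
        if not parsed.fragment:
            return False  # faults on hashless (slash) URIs
        if content_type.split(";")[0].strip().lower() != "text/turtle":
            return False  # faults on profiles not served as Turtle
        return True

A verifier written like that would reject perfectly good WebIDs that merely 
use another scheme, a slash URI, or another RDF serialization, which is 
exactly the interoperability loss I was pointing at.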

>
> Then add in RDF/XML, RDFa, N-Triples, and JSON-LD to get level 1.8, add in 
> acct: or ftp: to get level 2.2, and so forth.
>
> Each serialization and protocol added to the mix increases the power 
> of the WebID protocol; this is a good thing, not to be precluded in any way.

Yes, but that's not how I interpreted the effects of the WebID 
definition pushed at TPAC. Anyway, as stated above, I think my concerns 
are now allayed following the session we had with Henry earlier today 
and his subsequent summary post.

>
> The Hash-URI thing is a different issue; there are multiple reasons 
> they are preferred, but it's probably worth me mentioning why I am 
> +1 to having hash-http-URIs as the "default" for level 1. It's because 
> I see WebID as tying a URI to both parts of a key pair: the TLS side 
> binds the URI to the private part, the act of dereferencing ties the 
> URI to the public part, and the public part is already tied to the 
> private part. If a slash URI <a> redirects to another document <b>, 
> then it's <b> that is tied to the public part, not the <a> that's in the 
> cert. This, to me, opens a lot of questions, feels like it opens 
> the door to exploits and MITM attacks, and doesn't "prove" URI 
> ownership/control. Hence my strong preference for #hash URIs 
> here. If there's no problem with the redirects and the proofs all work 
> out, then I'm happy with either (my personal preference will always be 
> hashes, of course).
>
> Make sense?

Yes, I don't have an issue with Turtle or hashless URIs as the baseline. 
I only have an issue when they become rules that are forced upon 
end-users and developers.
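
To make Nathan's dereference argument above concrete, here is a rough sketch 
of that step. It is an illustration only: the use of rdflib and the function 
name are my own assumptions, not anything mandated by WebID. The point is 
that with a hash URI like <https://example.org/card#me> the key assertion is 
found in the very document the verifier fetches, whereas a redirected slash 
URI <a> leaves the assertion in the redirect target <b>.

    import rdflib

    CERT = rdflib.Namespace("http://www.w3.org/ns/auth/cert#")

    def profile_asserts_key(webid_uri: str, modulus_hex: str, exponent: int) -> bool:
        """Dereference the WebID and check it claims the certificate's RSA key."""
        graph = rdflib.Graph()
        doc_uri = webid_uri.split("#", 1)[0]   # the fragment is never sent over HTTP
        graph.parse(doc_uri, format="turtle")  # dereference the profile document
        webid = rdflib.URIRef(webid_uri)
        for key in graph.objects(webid, CERT.key):
            mod = graph.value(key, CERT.modulus)
            exp = graph.value(key, CERT.exponent)
            if mod is not None and exp is not None \
               and str(mod).lower() == modulus_hex.lower() \
               and int(exp) == exponent:
                return True  # the cert's public key is tied to this exact URI
        return False

If <a> 303-redirects to <b>, the triples parsed here come from <b>, so it is 
<b>, not the <a> in the certificate, that this check ends up trusting.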

I don't ever recall publishing a Linked Data example that wasn't based 
on Turtle (lately) or N-Triples (in the past). If you look up some of the 
old SWEO archives you'll see how I've always pushed for Turtle as an 
alternative to RDF/XML for all RDF-related examples in W3C specs :-)

I am only preoccupied with good interfaces and real openness. I believe 
quality always rises to the top, so I never believe in forcing stuff 
on anyone; the technology (if any good) will ultimately always speak for 
itself. Linked Data, RDF, Turtle, and WebID have always been awesome, in 
my eyes.

>
>


-- 

Regards,

Kingsley Idehen	
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen






