
Re: What is a WebID?

From: Kingsley Idehen <kidehen@openlinksw.com>
Date: Sun, 04 Nov 2012 14:10:14 -0500
Message-ID: <5096BD96.5050602@openlinksw.com>
To: Melvin Carvalho <melvincarvalho@gmail.com>
CC: public-xg-webid@w3.org, "public-rww@w3.org" <public-rww@w3.org>
On 11/4/12 1:18 PM, Melvin Carvalho wrote:
> On 4 November 2012 19:06, Kingsley Idehen <kidehen@openlinksw.com 
> <mailto:kidehen@openlinksw.com>> wrote:
>     On 11/4/12 7:46 AM, Andrei Sambra wrote:
>         Hi all,
>         I suggest we go back to the minutes from 30/10 and look at
>         what arguments were presented then.
>         http://www.w3.org/2012/10/30-webid-minutes.html
>         The main reason why we decided that WebIDs must be hash
>         URIs was to differentiate between URIs referring to
>         users/agents and URIs referring to documents (hashless URIs).
>         For more details, take a look at httpRange-14 issue:
>         http://www.w3.org/2001/tag/group/track/issues/14.
>         The reason why we decided to make Turtle mandatory was to try
>         to align ourselves with the LDP spec, since it's in both our
>         interests to do so. The main argument here (raised by TimBL)
>         was that we should focus on moving forward towards a WG, and
>         trying to support as many formats as possible (at this point)
>         will hold us back.
>         I know it's difficult for some of you to understand why these
>         changes are happening, but please everyone, just go and reread
>         the minutes. It's all in there.
>         Andrei
>     Reading the minutes doesn't change anything at all.
>     The definition is utterly broken. This is a total disservice to
>     this endeavor.
>     There were 16 +1's for this broken definition. Nathan asked the 16
>     +1'ers to defend their positions. Thus far, nobody has made a
>     cogent case for compromising the essence of AWWW and Linked Data.
>     If you believe in something, make a logical case for it. Thus far,
>     there is no logical case for compromising the essence of AWWW and
>     Linked Data en route to Web-scale verifiable identity.
>     Those of us that oppose this broken definition are ready to defend
>     our positions.
> Note: in the minutes I was the *only* person not to +1 this, but after 
> some thought I changed my mind, and here's my analysis:
> The technology we use has not changed.

It has: a critical requirement has changed. We can no longer use opaque 
URIs that denote entities. That's a humongous change.
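To make the distinction concrete: under the new rule, only the hash form is usable, where the part before '#' names the profile document and the full hash URI names the agent it describes. A minimal sketch using Python's standard library (the example.org WebID is hypothetical):

```python
from urllib.parse import urldefrag

# Hypothetical WebID: the full hash URI denotes an agent (a person),
# while the part before '#' denotes the document describing that agent.
webid = "https://example.org/profile#me"

document_uri, fragment = urldefrag(webid)
print(document_uri)  # -> https://example.org/profile (the document)
print(fragment)      # -> me (identifies the agent within that document)
```

A hashless URI such as https://example.org/profile can denote an entity perfectly well via a 303 redirect per httpRange-14, yet under the proposed definition it is no longer usable as a WebID at all.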

>   We still have complete, universal, tolerant structures using URIs 
> that obey the law of independent invention.

No you don't.

> Our solutions are interoperable.  Universal does not mean unique!

Wrong again.

The solutions in question (re. WebID) are no longer interoperable. A 
verifier will fault on a hashless URI. It will fault if a profile 
document isn't served as Turtle content. It will also fault on a 
non-http: scheme URI. Do you seriously regard that as interoperable?
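Here is roughly what the narrowed acceptance rule amounts to for a verifier. This is a hedged sketch; the function name is mine and not from any spec, and the DBpedia example simply illustrates the point:

```python
from urllib.parse import urlparse

def is_acceptable_webid(uri: str, profile_content_type: str) -> bool:
    """Sketch of the narrowed rule: http(s) scheme, a hash fragment,
    and a Turtle profile document. Everything else faults."""
    parsed = urlparse(uri)
    if parsed.scheme not in ("http", "https"):
        return False  # acct:, mailto:, urn:, etc. are rejected outright
    if not parsed.fragment:
        return False  # hashless URIs are rejected
    if profile_content_type != "text/turtle":
        return False  # RDF/XML or JSON-LD profiles are rejected
    return True

# A perfectly dereferenceable Linked Data URI (303-style, served as
# RDF/XML) that nonetheless fails the narrowed test:
print(is_acceptable_webid("http://dbpedia.org/resource/Tim_Berners-Lee",
                          "application/rdf+xml"))  # False
```

Every `False` branch above rejects something that is valid Linked Data today.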

> On branding: it's changed before and it can change again.  It's not a 
> huge deal to me personally.

It isn't going to be changed so trivially. Just watch. We've re-entered 
the RDF (Reality Distortion Field) zone, yet again.

> Henry has worked on WebID for some time at his own expense (and has 
> even been to prison for it!).

This has zilch to do with Henry. What have other implementers been doing?

> He should certainly be able to suggest branding that he feels 
> comfortable with, and that will be effective in meeting his goals and 
> expectations for the project.

Since you believe Henry is somehow the owner and determining factor of 
what constitutes the definition of WebID, again you miss the point of 
this endeavor. My involvement with WebID has nothing to do with Henry 
(whom I've known for many years), it has everything to do with Web-scale 
verifiable identity based AWWW and Linked Data.

> One of the pros was that it was felt this narrow definition would 
> expediate getting to REC status, either with a WG or by LDP using this 
> as the definition for identity.

WebID != Identity. It is (hopefully) a mechanism for Web-scale 
verifiable identity via the combined use of identifiers, security 
tokens, and an authentication protocol. The authentication protocol 
exploits entity relationship semantics and logic.
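A sketch of that protocol shape (the WebID-TLS pattern, heavily simplified): dereference the identifier found in the client's security token, then check that the profile asserts the same public key the token carries. All names, URIs, and key strings below are hypothetical stand-ins, not any spec's API:

```python
def verify(token_webid: str, token_public_key: str, fetch_profile) -> bool:
    """True iff the profile reachable via the WebID asserts the
    public key presented in the security token."""
    profile = fetch_profile(token_webid)           # dereference the WebID
    published_keys = profile.get(token_webid, [])  # keys the profile claims
    return token_public_key in published_keys      # relationship check

# Stand-in for an HTTP GET + RDF parse of the profile document:
PROFILES = {"https://example.org/card#me": ["PUBKEY-abc123"]}

def fetch(webid):
    # Pretend this performed an HTTP GET and parsed the profile.
    return {webid: PROFILES.get(webid, [])}

print(verify("https://example.org/card#me", "PUBKEY-abc123", fetch))  # True
```

The identity claim is verified by the relationship between identifier, profile, and key, which is exactly why the identifier's semantics, and not its surface syntax, are what matter.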

> Another pro is that it simplifies test suites.

No it doesn't. You can write technical specs that include implementation 
guides that form the basis of test suites, without utterly turning the 
endeavor on its head.

> Another is that WebID has a beachhead in Facebook, making it 
> potentially one of the largest identity systems on the Web, though 
> Henry didn't want to play that aspect up until there is deeper linked 
> data integration.

Facebook is already a large publisher of Linked Data [1]. I am sure you 
noticed that they haven't made a song and dance about Turtle or anything 
like that. They simply offer Linked Data as an option for Facebook Graph 
API developers; that's it.

> I personally like general definitions for things such as URIs, 
> AWWW, design issues, etc., but I think the feeling was that sometimes, 
> to get things done, you need to focus.

Those are not definitions. URIs are a critical component of AWWW. Linked 
Data exploits URIs en route to enabling webby structured data. RDF 
enables the incorporation of explicit entity relationship semantics into 
webby structured data.

>   We still have all the goodness of AWWW; we just need to alter 
> what we call things slightly.

You don't. You are trying to convince yourself of something that's an 
utter fallacy. The definition of WebID being pushed isn't in any way 
close to AWWW in spirit or essence. It's utterly alien.

>     Kingsley
>         On 11/04/2012 07:29 AM, Melvin Carvalho wrote:
>             On 4 November 2012 12:47, Jürgen Jakobitsch
>             <j.jakobitsch@semantic-web.at
>             <mailto:j.jakobitsch@semantic-web.at>> wrote:
>                 hi melvin,
>                 for me the problem is that we now have a political
>             dimension of personal
>                 preferences which cut my personal freedom of choice.
>                 if we award other linked data groups the same
>             behaviour (express
>                 preferences of uri or serialization) the argument
>             about the advantages
>                 of having one kind of uri and one kind of
>             serialization becomes void.
>                 linked data works with any kind of dereferenceable uri
>             and any kind of
>                 serialization.
>                 if webID only works with hash-http-uris and turtle it
>             is just another
>                 application in the spirit of web2.0 in the special
>             disguise of using
>                 linked data techniques.
>             I really do sympathize with the points you made and I was
>             initially
>             taken aback by this.  But having thought about it, I've
>             warmed to the
>             idea.  LDP is on a REC track and is possibly the group
>             most relevant to
>             our work.  If we can avoid duplication of effort that
>             would be a plus,
>             imho.
>             I really don't think anything has changed.  Give yourself a
>             dereferenceable URI and you're "on the web".
>             WebID itself is just a name, and it will hopefully have a
>             URI soon of
>             the form urn:rfc pointing to a spec.
>             So the spec started by mandating FOAF, then it mandated
>             an Agent, and now it
>             mandates Turtle.  Things change, and may change again
>             before 2014 when
>             LDP becomes a REC.
>             Is there really a problem with hash URIs?  Redirects are a
>             pain to
>             program.  Ontowiki did object to this but after some
>             thought worked out
>             their architecture may even be better without the redirects.
>             In what way do you think this is in the spirit of web 2.0?
>              It is using
>             a completely generalized and universal platform to solve a
>             specific case
>             in a way that will be interoperable and follow standards.
>     -- 
>     Regards,
>     Kingsley Idehen
>     Founder & CEO
>     OpenLink Software
>     Company Web: http://www.openlinksw.com
>     Personal Weblog: http://www.openlinksw.com/blog/~kidehen
>     Twitter/Identi.ca handle: @kidehen
>     Google+ Profile: https://plus.google.com/112399767740508618350/about
>     LinkedIn Profile: http://www.linkedin.com/in/kidehen



Kingsley Idehen	
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen

Received on Sunday, 4 November 2012 19:10:40 UTC
