
Re: WebID definition - take 2

From: Kingsley Idehen <kidehen@openlinksw.com>
Date: Mon, 05 Nov 2012 14:00:51 -0500
Message-ID: <50980CE3.1020507@openlinksw.com>
To: Henry Story <henry.story@bblfish.net>
CC: public-webid <public-webid@w3.org>, Tim Berners-Lee <timbl@w3.org>, Alexandre Bertails <bertails@w3.org>
On 11/5/12 12:12 PM, Henry Story wrote:
> Kingsley and Jürgen were quite strongly -1 on the TPAC definition, and Kingsley even made this known on IRC at the time ( one hour after the discussion, since he was not present ). I just spent some time with them to see where they would agree, as the arguments on the list were very long: http://lists.w3.org/Archives/Public/public-webid/2012Nov/0000.html
>
> I. Agreement:
> -------------
>
> We agreed on the following two points:
>
> (1) A WebID is "A URL that denotes an Agent - Person, Software or Organisation."
>
>     I think we probably have 100% agreement on this everywhere.
>     Note that we use URL here as this makes it clear that the Identifier is dereferenceable.
>
>     ( Also the idea was that this is easier than defining an HTTP URL, since one would then be left over with HTTP or HTTPS or ... ) The URL concept captures the most important part of the Web, which is that there is a canonical way of dereferencing the URL and getting the document that it names. It should then be possible for the LDP folks to define the HTTP subset of WebIDs as the ones they are interested in.
>
>     Also I would like to use Tor's .onion and i2p's .garlic URLs, in order to be able to make a strong case for the value of having social networks in those anonymised systems. This is really important in the security space. ( It turns out that those are also HTTP URLs - but it is a bit of a hack that they are! )
>
> (2) "The WebID when dereferenced MUST return a document/representation that describes the URL referent uniquely."
>
>    This makes it possible to use different forms of authentication given different identifying descriptions:
>
>    a) Using openid: { <#me> foaf:openid <> }
>    b) Using email:  { <#me> foaf:mbox <mailto:joe@name.com> }
>    c) using a postal mail: { <#i> :office [
>                                       :street "32 Vassar Street";
>                                       :street2 "MIT CSAIL Room 32-G524";
>                                       :city "Cambridge";
>                                       :postalCode "02139";
>                                       :country "USA"
>                                      ]
>                              }
>    d) using a telephone: { <#i> :office [  :phone <tel:+1-617-253-5702> ] . }
>    e) using a public key { <#i> cert:key .... }
>
> As I understand it, the main reason we shifted from the current definition, which restricted WebIDs to public keys, was to allow a wider range of definitions and so a wider number of authentication protocols to be associated with a WebID.
>
>    Notice that all these graphs describe the user with some inverse functional property or owl key, which is what makes them "uniquely identifying" descriptions, or what is known in philosophy/logic as Definite Descriptions.
>
>   The public key of course has a lot of privacy advantages over the other options... but that is what will be described by the WebID Authentication Protocol over TLS.
>     
> So with (2) we still have an identifier that does not require backchannels to give the sense of the referent, i.e. something that can be used to uniquely describe it. These are URLs, so they are also linkable in a meaningful way, and the Linked Data folks will be happy - as they should be.
>
>
> II. Disagreement
> -----------------
>
> I think the disagreements were:
>
> 1) limiting to only http URLs or https URLs
>
>   I think it is difficult to make limitations like this. There is no reason why sub-protocols - like LDP - cannot make
> restrictions there. At the same time I think that a WebID has to be a URL: it has to make it possible to find a
> document that can be dereferenced - otherwise there is really not much one can do with it.
>
>
> 2)  Turtle as default representation
>
> Tim Berners-Lee and Alex Bertails were clearly strongly in favour of Turtle being the default representation that must be returned when requested, in order to foster interoperability. A lot of people agreed with that argument.
>
> Until now the WebID spec has tried to be open, since the advantages of having RDFa returned are clear to many, and also since we could make a lot of other formats GRDDLable. The idea there was to allow as many people as possible to publish WebIDs, and not get into religious wars over syntax with them. This pushed the burden of interpretation onto the verifying agent.
>
>    The question then is where should the interpretation burden be: on the WebID writer, or on the verifier?
>
> 3) #urls
>
>    I think there are good reasons for the WebID to be restricted to #urls, especially as this would remove all the problems we have with HTTP redirects and the question of what those would mean for the strength of authentication, and there is no need to look into the whole httpRange-14 issue. This also makes it easy to explain WebID by reference to the URI/URL spec.
>
>
> III. Conclusion
> ---------------
>
> I think it is ok to add the II.2 and II.3 restrictions above if that helps make adoption easier.
>
> The best way we can tell, in fact, is to see how current implementations agree or disagree in that
> space. I think writing a WebID over TLS validator will probably help there.
>
> Social Web Architect
> http://bblfish.net/
>
Henry,

Great summary.

I would like to clarify an important item that hasn't quite come through 
in your summary, with regard to hash URIs and Turtle. First off, the 
pragmatic utility of both is obvious and a no-brainer to us. Our concern 
is solely with their insertion into the WebID definition, when the 
desired pragmatic benefits are most effectively delivered via 
implementation guides and exploitation examples.

A WebID verifier should never be coded specifically for a hash URL. It 
should simply dereference the URL in the SAN (subjectAltName) of an 
X.509 certificate and then process the retrieved data, if it can.
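To make the point concrete, here is a minimal verifier-side sketch of that step in Python (stdlib only). The certificate dict, URL, and function names are placeholders of mine; the dict merely mirrors the shape returned by ssl.SSLSocket.getpeercert():

```python
import urllib.request

def extract_san_uris(cert):
    """Return every URI entry from a certificate's subjectAltName (SAN).

    `cert` follows the dict shape returned by ssl.SSLSocket.getpeercert():
    'subjectAltName' maps to a tuple of (type, value) pairs.
    """
    return [value for (kind, value) in cert.get("subjectAltName", ())
            if kind == "URI"]

def profile_request(webid):
    """Build the dereferencing request for a WebID's profile document.

    Note there is no special-casing of hash URLs: the fragment plays no
    role at the HTTP layer, so the same code handles #-URLs and any
    other dereferenceable URL.
    """
    return urllib.request.Request(webid, headers={"Accept": "text/turtle"})

# Hypothetical certificate content, for illustration only:
cert = {"subjectAltName": (("URI", "https://example.org/profile#me"),)}
webids = extract_san_uris(cert)
req = profile_request(webids[0])
# urllib.request.urlopen(req) would then fetch the profile document.
```

The verifier simply takes whatever URI it finds in the SAN and dereferences it; any restriction to hash URLs or to Turtle belongs in the data it then processes, not in this code path.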

Please note, the statement above doesn't imply that Turtle can't be 
strongly encouraged (via SHOULD or MUST) as the minimal format to be 
supported by publishers of profile documents and implementers of 
verifiers. The reason for Turtle's viability, in this regard, boils down 
to it being the format that best negates the following Linked Data 
exploitation hurdles:

1. domain ownership
2. DNS server access and appropriate privileges
3. web server access and appropriate privileges
4. use of URLs to unambiguously denote entities associated with 
description (or descriptor) documents.

All we need to do is use Turtle and hash URIs for WebID implementation 
and exploitation examples [1][2][3].
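For instance, a minimal hash-URI profile document in Turtle could look like this (the name and key values are placeholders of mine; cert: is the W3C cert ontology at http://www.w3.org/ns/auth/cert#):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix cert: <http://www.w3.org/ns/auth/cert#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

# <#me> resolves relative to wherever this file is served from,
# so the document can live on any static host that returns it as-is.
<#me> a foaf:Person ;
      foaf:name "Joe Example" ;                      # placeholder name
      cert:key [
          a cert:RSAPublicKey ;
          cert:exponent 65537 ;
          cert:modulus "cafe"^^xsd:hexBinary         # truncated placeholder
      ] .
```

No domain ownership, DNS access, or web server configuration is required beyond the ability to publish a static file, which is exactly the set of hurdles listed above.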

Links:

1. http://bit.ly/MYLsMu -- DIY Linked Data deployment using Dropbox
2. http://bit.ly/O4LNKf -- How to create and control your own verifiable 
digital identity, at Web-scale
3. http://bit.ly/NzfyF0 -- Facebook and Linked Data analysis note I 
wrote a while back.


-- 

Regards,

Kingsley Idehen	
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen

Received on Monday, 5 November 2012 19:01:15 UTC
