
Re: WebID-ISSUE-10 (bblfish): Hash URLs for Agents [ontologies]

From: Sarven Capadisli <sarven.capadisli@deri.org>
Date: Sun, 30 Jan 2011 13:21:04 -0500
To: Henry Story <henry.story@bblfish.net>
Cc: WebID Incubator Group WG <public-xg-webid@w3.org>
Message-ID: <1296411664.2004.41.camel@csarven-laptop>
On Sun, 2011-01-30 at 01:53 +0100, Henry Story wrote:
> On 30 Jan 2011, at 01:32, WebID Incubator Group Issue Tracker wrote:
> 
> > On 29 Jan 2011, at 21:04, Peter Williams wrote in the archived mail
> > http://www.w3.org/mid/SNT143-w25D1D87B1A483EF3B8536E92E00@phx.gbl
> > 
> > What I really liked about the use of RDFa in the FOAF+SSL pre-incubator world was that the good ol' home page could easily be a foaf card, and thus the home page URI is a WebID stem. To the average punter (who will rarely understand the significance of the #tag on the end), the home page URI is a WebID.
> 
> For people who are just joining consider the graph here:
> http://www.w3.org/2005/Incubator/webid/ED-spec-20110121/#publishing-the-webid-profile-document
> 
> You will see that the web page has a different URL from the person. That is because you can
> ask the question of how many characters are on the Profile Page, but it won't make sense
> to ask how many characters are on Joe, and even if it did, the answer would usually
> be different. So logically there are good reasons to have different URLs for each.
> If you give the same name to two things you can get a lot of confusion [ anybody have a link to
> a comedy sketch that makes use of such a situation? ] And in fact in the semweb, where
> things are defined precisely, you can prove that this is wrong.
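
[The document/person distinction Henry describes can be sketched in Turtle; the URIs below are hypothetical, chosen only to illustrate the pattern. The question "how many characters?" makes sense for the first resource but not the second.]

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# The Profile Document -- a web page you can count characters in
<http://joe.example/profile>
    a foaf:PersonalProfileDocument ;
    foaf:primaryTopic <http://joe.example/profile#me> .

# The person, named with a hash URI inside that document
<http://joe.example/profile#me>
    a foaf:Person ;
    foaf:name "Joe" .
```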

This is precisely true in the context of machines, but not necessarily
for humans. Human languages simply rely on contextual usage to make
sense of multiple things sharing the same name. We look for more clues
than what's presented to us on the surface, and we are pretty good at
making those distinctions on the fly, applying our own heuristics to
whatever information we can get hold of (e.g., context of the
conversation, body language, previous experience).

From the human social point of view, the homepage, WebID, user account
(and more) may very well all be http://foobook.com/dude, and I think
that's perfectly fine. It doesn't help to evaluate the correctness of
social languages (system A) using technical languages (system B).

Hence, I naturally agree with the following:

>  Some points to notice:
> - The end user, mom and pops, won't ever see a WebID. It will be hidden in a certificate. If they see anything it will be a home page.
> - The WebID server logic will be mostly hidden in libraries

as it leads us in the right direction.

> So the only person this could be an issue with is the producer of the RDF.
> If the RDF is generated automatically, then this won't be such a problem,
> which is why RDF/XML and Turtle (please all learn turtle) have a long life
> in front of them.
> 
> So the issue then is with the html developer. I think he can be taught. If he
> does not do it right, it won't be a disaster immediately. One day it could
> make his life awkward...
> 
> > There is no way in a million years I'll get even 2 realtors to ever use the foaf-generator sites and tools listed on the wiki. Getting them to add a paragraph of special html markup interspersed with normal paragraph form... is quite feasible. It's a template, and we can give it to them.
> 
> We should improve the documentation as stated. I think WebID test suite will help.
> Realtors seem unlikely to me to be building their own solution to this. My guess
> is that they will buy some solution. That solution will help them do the right thing
> easily.
> 
> > This RDFa argument for foaf cards mattered to me. It was like the "add sound file to mosaic browser" moment, successfully dumbing down stuff for the mass of folk without preventing the technical standards doing their thing, just as the experts here define.
> 
> 
> How did it matter to you other than in a theoretical way? As I pointed out
> people can make mistakes, that won't break things immediately. But since we know
> the best way to do things right, we might as well specify it. People will make their
> mistakes whatever we do, but they won't be able to blame us.

I wouldn't put it in terms of blaming, but I agree with your point.

We have an obligation to specify things as accurately as possible. When
mistakes occur, which is inevitable, we'd use other means to correct
them, or close the gap on what's intended using heuristics in libraries.

All of this is based on the premise that 'the task of verifying the
soundness or even completeness of potential WebIDs is outside the scope
of where they are claimed'. I think this is also in line with the idea
that, in the RDF world, everyone is free to claim anything, in any
amount.

> People have managed to use the web without understanding the basics of how it works.
> It just costs them over time. Imagine a news site that changes the URLs of its
> articles. Doing that will break all incoming links, discouraging people from pointing
> to them, and so reducing their long term value. There are many other examples.
> The W3C architecture group produces some fine documents whose authoritative power
> lies not in the force of human law - nobody will stop anyone building their
> broken web site with missing links all over the place - but in the value to the
> user of doing things the right way.
> 
>   worth reading btw: http://www.w3.org/TR/webarch/
>  
> 
> Henry
> 
> Social Web Architect
> http://bblfish.net/
> 
> 


-Sarven
Received on Monday, 31 January 2011 08:24:55 GMT
