Re: Using "Punning" to Answer httpRange-14

On 5/15/12 8:53 AM, Michiel de Jong wrote:
> OK, the diagram is very helpful! now we're getting somewhere.
>
> On Tue, May 15, 2012 at 1:52 PM, Kingsley Idehen<kidehen@openlinksw.com>  wrote:
>> https://docs.google.com/drawings/d/1ZUzBa4HjNUXg_OeFudwK0XO70VeJRxJoXv4RW2KamhY/edit
>>   -- illustration of what happens with names and indirection re. Linked Data
> I understand that you say:
> - if you want to publish a link to a document, make sure you don't put
> a '#' in the URL.
> - if you want to publish a link to a sense, make sure that either you
> put a '#' in the URL, or that the URL returns a 303.
>
> So if i build a client based on your diagram, then that means my
> client will be compatible with hash-uri-rule camp content, and also
> with 303 camp content (provided they never refer to document fragments
> or hashbangs), but not with punning camp content.

Sorta, because I can't categorize your application based on the
information you've presented at this point. For instance, into which
category would you place your app, based on the list below:

1. Basic Web app. -- simply de-references URIs (specifically resource
URLs) and then processes content (typically in HTML format)

2. Linked Data Web app. -- de-references URIs with the explicit
understanding that Linked Data URIs identify description subjects and
that these URIs resolve to EAV-graph-based content serializable in a
variety of negotiable formats

3. RDF-based Linked Data app. -- same as above, but the formats must
belong to the RDF family of formats, and there has to be an
understanding of relation semantics such as wdrs:describedby,
foaf:primaryTopic, rdfs:isDefinedBy, etc. (see the sketch after this
list)
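
To make the distinction concrete, here is a minimal Python sketch of
categories 1 and 2, with a comment noting what category 3 would add.
The Accept header values are illustrative assumptions, not a
prescription:

import urllib.request

def fetch_basic(url):
    # 1. Basic Web app: de-reference the URL and process whatever
    #    comes back (typically HTML)
    return urllib.request.urlopen(url).read()

def fetch_linked_data(url):
    # 2. Linked Data Web app: ask for a negotiable description format;
    #    any 303 redirect is followed transparently, and the body is
    #    treated as an EAV/graph description of the URI's subject
    req = urllib.request.Request(url, headers={
        "Accept": "text/turtle, application/rdf+xml;q=0.9, "
                  "application/ld+json;q=0.8, text/html;q=0.3"})
    resp = urllib.request.urlopen(req)
    return resp.headers.get("Content-Type"), resp.read()

# 3. An RDF-based Linked Data app would additionally parse the body
#    with an RDF library and interpret relations such as
#    wdrs:describedby or foaf:primaryTopic before acting on the data.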


You see, items 1-2 play well with the Web as it exists. Item 3 plays
back into the very issues that have stunted comprehension and adoption
of the Semantic Web Project's world view since its inception. That's
what I mean by "semantic fidelity at the front-door", since it requires
acceptance and comprehension (on the part of the client developer) of
the semantics associated with the predicates/properties in question.

When TimBL dropped the original Linked Data meme (which I've always
seen as GOLDEN) he deftly steered clear of RDF specificity. At best,
RDF was an option. Sadly, as the meme took off (contrary to prior
efforts re. the Semantic Web Project) everyone sought to jump on board
to ride this juggernaut, and in doing so ended up muddying the waters
and reintroducing friction-laden problems of yore.

>
> Given that most people who publish web content (i.e. web designers)
> have never heard of 303s and hash-uri-rule, that's a big problem.

The big problem is what I describe above. My diagram is simply about
understanding that you have the following:

1. Web Resources
2. Real World Entities

You can use HTTP URIs as Names for either. If you want to see how I
demonstrate that, look at what we do with both WebID [3] and URIBurner
[1].
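
As a rough illustration of the hash/303 heuristic in my diagram (not a
description of how WebID or URIBurner are actually implemented), a
client could classify a URI along these lines:

from urllib.parse import urlparse
import http.client

def classify(uri):
    # Hash URIs name description subjects (Real World Entities) directly
    parts = urlparse(uri)
    if parts.fragment:
        return "Real World Entity (hash URI)"
    conn_class = (http.client.HTTPSConnection
                  if parts.scheme == "https"
                  else http.client.HTTPConnection)
    conn = conn_class(parts.netloc)
    conn.request("HEAD", parts.path or "/")
    resp = conn.getresponse()
    if resp.status == 303:
        # The Name resolves, via 303, to the Address of its description
        return "Real World Entity (described at %s)" % \
            resp.getheader("Location")
    if resp.status == 200:
        return "Web Resource (document)"
    return "Undetermined (HTTP %d)" % resp.status

The point is only that the two branches -- fragment Names versus
303-redirected Names -- can be told apart mechanically, without any
RDF-specific knowledge.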

A "big problem" is one that forces people to adopt relation semantics 
such as ignoring the implications of a 200 OK, by using a "Location:" 
header to point to an RDF resource where the graphical content is 
comprised of relations that post translation enable you disambiguate 
Names and Addresses. That's a problematic tax for a technology is a 12+ 
year image problem .
>
> Also, it only works for links and not for document elements like
> <span>  or<h2>  which can also be marked up semantically.
>
> Consider an easy example: someone writes a blog, and adds a
> 'property="author"' attribute to a link the link's href is e.g.
> "http://example.com/author.html". According to your diagram, that
> means a web page wrote the web page.

That person is saying that the author is named
http://example.com/author.html . In the document Web realm that's fine:
my human brain is able to disambiguate the ambiguous name, so friction
is zero. Of course, in the Linked Data realm a conflation problem
arises from that pattern, but a Linked Data tool could opt (as we do
re. URIBurner) to handle the disambiguation, as sketched below.
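
For illustration only -- this is not URIBurner's actual URI scheme --
the disambiguation move a middleware layer can make is to mint a
distinct entity Name whose describing document wraps the original
address:

from urllib.parse import quote

def proxy_entity_uri(document_url,
                     base="https://middleware.example/about/"):
    # Hypothetical pattern: the blogger's link target remains the
    # Address of a document, while the minted hash URI Names the thing
    # (the author) that the document describes.
    return base + quote(document_url, safe="") + "#this"

# proxy_entity_uri("http://example.com/author.html")
#  -> "https://middleware.example/about/
#      http%3A%2F%2Fexample.com%2Fauthor.html#this"

The blogger's markup stays untouched; only the consuming tool changes
its interpretation.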

Now back to the anti-200-OK option re: http://example.com/author.html .
What do you think will happen here? Note, the realm is a Web of Linked
Data as opposed to Linked Documents. To use an RDBMS analogy, this is
akin to saying: I am now in a realm where Records are the unit of
interest, not the Tables in which they reside.
> not what was meant by the
> blogger. so then you submit a comment to the blog saying 'hey, your
> blog is broken!'.

Only a broken Linked Data application would do that.

I am sure some Semantic Web applications might do that, but oh well, 
what's new?

> you do this 2 billion times because there is a lot
> of content out there on the web.

That's not my world view. Again, look at URIBurner, the second Linked
Data application we built after our Linked Data browsers.

> the blogger reads your comment,
> learns about linked data, apologizes to you, and quickly phones up
> godaddy where her blog is hosted, and ask how to put a 303 on
> "http://example.com/author.html". godaddy says they don't know what
> she's talking about either, so in the end she opts for the easier
> option of changing the link to "http://example.com/author.html#". now
> your client works again.

Again, see comments above re. Linked Data middleware as exemplified by 
URIBurner.

>
> in the end your client will become like the new IE6. people who use it
> will have to complain a lot to webmasters, asking them to change
> existing content in order to comply with its weird non-mainstream
> quirks.
>
> Do you see the problem? Jeni explains this problem in her blogpost. I
> find it a convincing argument to stop trying to make 303s and
> hash-uri-rule obligatory.

The point isn't about making anything obligatory. The point is about 
leaving the AWWW as it is. It handles the problem well. The fact that 
people get it wrong doesn't mean you break the system.
> the standards should work with the existing
> content out there as much as possible. Do you not think so?

Of course, and that's the fundamental point! Don't break the Web!
That's why, as I said, following a chat with TimBL circa 2006, we did
the following:

1. commented out our code for internal redirection -- which has the
same effect as punning
2. adopted the 303 heuristic for Name / Address disambiguation (a
minimal sketch follows this list)
3. built middleware to deal with existing content so that from day one
there would be a massive Web of Linked Data based on what already
exists on the Web.
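
A minimal sketch of item 2, assuming illustrative paths (/id/alice for
the entity Name, /doc/alice for its describing document -- not our
actual URL layout):

from http.server import BaseHTTPRequestHandler, HTTPServer

class NameAddressHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/id/alice":
            # /id/alice Names a Real World Entity, so it never answers
            # 200 OK itself; it 303-redirects to the document that
            # describes it
            self.send_response(303)
            self.send_header("Location", "/doc/alice")
            self.end_headers()
        elif self.path == "/doc/alice":
            # /doc/alice is the Address of a Web Resource (a document)
            body = b"</id/alice> a <http://xmlns.com/foaf/0.1/Person> .\n"
            self.send_response(200)
            self.send_header("Content-Type", "text/turtle")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# HTTPServer(("", 8000), NameAddressHandler).serve_forever()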

Links:

1. http://uriburner.com -- Linked Data Middleware
2. http://ode.openlinksw.com -- Linked Data Browser and Extensions for 
working with our Linked Data middleware
3. http://id.myopenlink.net/certgen/certgen.vsp -- How you make WebIDs 
using existing Identity claims from existing Identity Providers.

>
>
> Cheers,
> Michiel
>
>


-- 

Regards,

Kingsley Idehen	
Founder&  CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
