
Re: Squaring the HTTP-range-14 circle [was Re: Schema.org in RDF ...]

From: Kingsley Idehen <kidehen@openlinksw.com>
Date: Fri, 17 Jun 2011 13:13:12 +0100
Message-ID: <4DFB44D8.3020609@openlinksw.com>
To: public-lod@w3.org
On 6/17/11 1:46 AM, David Booth wrote:
> I agree with TimBL that it is *good* to distinguish between web pages
> and dogs -- and we should encourage folks to do so -- because doing so
> *does* help applications that need this distinction. But the failure to
> make this distinction does *not* break the web architecture any more
> than a failure to distinguish between male dogs and female dogs.

Instead of *break*, what about compromising or undermining the flexibility 
implicit in AWWW? That is tantamount to obscuring the WWW's potential 
for its broad user constituency.

Re. schema.org, I don't regard their effort as breaking, compromising, 
or undermining AWWW. I simply believe they are taking baby steps that 
are 100% defined by their current business models. Rightly or wrongly, 
they have to protect those business models. In a sense, the same 
applies to academia and its model, where grant funding is vital to 
research projects.

What is dangerous, though, is encouraging people to misuse and 
misunderstand AWWW. Names and Addresses are distinct items, and the 
essence of AWWW depends on preserving this vital distinction.
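To make the Name/Address point concrete, here is a minimal sketch (with hypothetical URIs and made-up properties) of what goes wrong when one URI is used both to name a dog and to address the page about it: statements intended for the two different things collapse onto the same subject.

```python
# Hypothetical example: one URI doing double duty as Name and Address.
uri = "http://example.org/dog"

triples = [
    (uri, "weighsKg", 12),               # meant for the dog (the Name)
    (uri, "lastModified", "2011-06-17"), # meant for the page (the Address)
]

# A consumer grouping statements by subject cannot tell them apart:
by_subject = {}
for s, p, o in triples:
    by_subject.setdefault(s, []).append((p, o))

print(by_subject[uri])
# Both statements land on one subject -- so does the *dog* have a
# lastModified date, or does the *page* weigh 12 kg? The data can't say.
```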

When there are more applications (+1 to Henry's comment about focusing 
on Linked Data apps and viral patterns), this lower-level matter will 
fade into the background.

Although not present (I am too young), I am certain similar arguments 
arose during the early days of silicon-based computing between OS 
developers and programming-language developers. I certainly know these 
conversations did arise when spreadsheet vendors tackled cell 
references and named cells [1][2].
There are many useful cases in plain sight that many overlook re. the 
power of URIs as data conductors, integrators, and access mechanisms. I 
think (based on my experience with this community and the industry at 
large) that there is too much focus on reinventing too many parts of the 
consumption stack from scratch. The key is to be "useful", but to 
introduce "usefulness" unobtrusively if you really seek uptake. 
Naturally, this requires an understanding of what already exists (i.e., 
domain and subject-matter knowledge) and the functionality areas 
addressed by existing solutions. Sorry, but if all you do is program, 
you cannot really understand the reality of end-users.

I like to reference Apple as a great anecdote because they've risen 
from near demise to the vanguard of modern computing by exploiting the 
InterWeb from the inside out. They don't see the Web as simply being 
about HTML; they understand that it's a linked information space and a 
future data space. They use this insight internally in a manner that 
simply manifests as being "useful" to their ever-growing customer base.

Remember, there's a lot of old NeXTStep still underlying what Apple 
does. Also remember, the WWW was built on a NeXT machine, with a lot of 
inspiration from how its innards worked. Believe it or not, we are still 
playing catch-up (circa 2011) with NeXTStep and Unix in general re. 
really smart and useful Linked Data apps :-)

Embrace history and the future gets clearer and much more exciting. We 
have an unbelievable opportunity within grasp. We can embrace and extend 
(in a good way) what we may perceive as imperfections in others' work 
(e.g. schema.org). As Pat stated in an earlier post, these imperfections 
present opportunities that might even span decades before the behemoths 
out there hit their respective opportunity-cost thresholds. Once those 
thresholds are hit, they will respond accordingly via product fixes 
and/or enterprise acquisitions, etc.

Contrary to popular belief, I will state once again that HTTP 303 is the 
poster child for the ingenuity inherent in the HTTP protocol and the 
AWWW. Yes, we could also up the semantic smarts on clients and let a 
retrieved resource disambiguate Names and Addresses, but that only adds 
a burden to a target audience that's already challenged re:

1. recognizing linked data structures via directed graphs
2. recognizing that linked data structures have always been about links, 
and that HTTP URIs are a powerful vehicle for expanding this concept to 
InterWeb scales
3. recognizing that de-reference (indirection) and address-of operations 
are achievable via URIs, and cost-effectively so via HTTP URIs due to 
the WWW
4. understanding that RDF is *an option* for linked data structures at 
InterWeb scales; you can use other syntaxes without losing access to 
really useful stuff like RDFS and OWL semantics (which also suffer from 
over-emphasis on RDF at the expense of core syntax-agnostic concepts).
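The 303 pattern referenced above is simple enough to sketch. Below is a minimal, hedged illustration (hypothetical URIs; responses are simulated in a dict rather than fetched over the network) of the rule a client can apply: a 200 means the URI addresses a document, while a 303 means the URI names something else and the Location header points to a document describing it.

```python
# Simulated server responses: (status, Location header).
# These URIs and responses are illustrative, not a real deployment.
RESPONSES = {
    "http://example.org/id/dog":  (303, "http://example.org/doc/dog"),
    "http://example.org/doc/dog": (200, None),
}

def resolve(uri):
    """Apply the 303 rule once: classify the URI and find its describing
    document. Returns (kind, document_uri)."""
    status, location = RESPONSES[uri]
    if status == 303:
        # The URI names a non-information resource (e.g. a dog);
        # the description lives at the redirect target.
        return ("non-information resource", location)
    return ("information resource", uri)

print(resolve("http://example.org/id/dog"))
print(resolve("http://example.org/doc/dog"))
```

The point of the indirection is that the distinction is carried by the protocol itself, so clients need no extra semantic machinery to keep Names and Addresses apart.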


1. http://en.wikipedia.org/wiki/Spreadsheet#Cells
2. http://en.wikipedia.org/wiki/Spreadsheet#Named_cells



Kingsley Idehen
President & CEO
OpenLink Software
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen
Received on Friday, 17 June 2011 12:13:39 UTC
