Re: Why Linked Data?

On 2/10/12 7:14 AM, Melvin Carvalho wrote:
> On 10 February 2012 13:10, Kingsley Idehen <kidehen@openlinksw.com> wrote:
>> On 2/9/12 3:52 AM, Melvin Carvalho wrote:
>>> I've had a lot of feedback on the Web Credits spec so far and I think
>>> there is a common question.
>>>
>>> "Why use Linked Data"
>>>
>>> I'm unsure as a standards community we've explained the case well
>>> enough to those that may be relatively new to the concept.  Perhaps
>>> it's something we take for granted.
>>>
>>> I was wondering how we can make the case for linked data and whether
>>> that motivation should be translated into our documents.
>>>
>>> Perhaps when things are still relatively early in standardization
>>> process (Community Group) we should focus some time on discussing the
>>> advantages of the LD approach to standarization?
>>>
>>>
>> Melvin,
>>
>> As they say, "horses for courses .."
>>
>> Linked Data has to be explained in different ways to different audiences.
>> Thus, there is no harm in describing the virtues of Linked Data in  ways
>> that specifically apply to payment systems and the redefinition of money in
>> general.
>>
>> As you work through your IOU system, points of Linked Data virtue will
>> materialize in ways that simplify articulating the value proposition etc..
> The motivation of frictionless payments is to open up value creation.
>
> The old mantra of websites:  create value then create income
>
> What about apps?  Create value then create income
>
> What about individuals?  Create value then create income
>
> Reduce the barriers to creating value and creating income and everyone
> benefits.  In fact, we could see an explosion in new productive
> activity!

Yes, but this is still about application logic orchestrating data 
access. Thus, you have to align closely with the age-old concept of 
decoupling application logic from data access; then with the separation 
of data access protocols from data representation formats; and finally 
with the loose coupling of schema and actual data representation.

The items above are timeless quests across the computer industry. What's 
different today are the following:

1. HTTP ubiquity for data access
2. Content Negotiation for data representation
3. URIs for abstracting fine-grained data access across protocols and 
data representation formats
4. EAV/SPO as a "deceptively simple" model for expressing data relations 
which also serves as an across-the-wire data serialization format
5. Move towards intensional interaction with data (good old object 
theory) that builds on the more common extensional interaction with data 
(what you see in all "identity"-challenged systems, e.g., the RDBMS realm).
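For concreteness, points 3 and 4 above can be sketched in a few lines of 
Python: the EAV/SPO model holds data as bare subject-predicate-object 
statements, so new attributes can appear at any time without an up-front 
schema change, and URIs give fine-grained access to individual values. 
The URIs and names below are illustrative placeholders, not part of any 
actual spec.

```python
# Minimal sketch of the EAV/SPO (Entity-Attribute-Value /
# Subject-Predicate-Object) model from point 4. All URIs are
# illustrative placeholders.
triples = set()

def add(s, p, o):
    """Assert one subject-predicate-object statement."""
    triples.add((s, p, o))

# Data needs no up-front table definition -- each statement stands alone.
add("http://example.org/kidehen#this",
    "http://xmlns.com/foaf/0.1/name", "Kingsley Idehen")
add("http://example.org/kidehen#this",
    "http://xmlns.com/foaf/0.1/knows", "http://example.org/melvin#this")
# A new attribute can be introduced at any time, no schema change needed:
add("http://example.org/melvin#this",
    "http://xmlns.com/foaf/0.1/name", "Melvin Carvalho")

def objects(s, p):
    """Fine-grained access (point 3): values for one entity + attribute."""
    return [o for (s2, p2, o) in triples if s2 == s and p2 == p]

print(objects("http://example.org/kidehen#this",
              "http://xmlns.com/foaf/0.1/name"))
```

The same (s, p, o) tuples can be serialized directly over the wire, 
which is what makes the model double as a serialization format.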

Concise narrative:

Pre-Web, open data access was pursued primarily in the RDBMS realm via 
the likes of ODBC, JDBC, OLE-DB, various DBMS-specific SQL CLIs, etc..

The goal was all about decoupling application logic from backend DBMS 
servers.

The Web added the following dimensions to the issues above and in the 
process highlighted the limitations of conventional RDBMS technology:

1. Data Volume
2. Data Velocity
3. Data Heterogeneity
4. Data Source Disparity

1-2 are typically referred to as "Big Data" these days, while (in the 
last 48 hours, courtesy of Jim Hendler) 1-4 have fallen under the banner 
"Broad Data".

To conclude, it's the same old problem in a new context -- separation of 
application logic from data access :-)

-- 

Regards,

Kingsley Idehen	
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen

Received on Friday, 10 February 2012 12:38:39 UTC