- From: Kingsley Idehen <kidehen@openlinksw.com>
- Date: Fri, 27 Feb 2015 18:19:41 -0500
- To: public-lod@w3.org
- Message-ID: <54F0FB8D.5060303@openlinksw.com>
On 2/27/15 4:19 PM, Paul Tyson wrote:
> I don't know that many "linked data" systems improve much on
> conventional ones.

To that answer, I ask: what is the problem?

As I see it, the problem boils down to data access, integration, and dissemination. This has to happen on time, in the right form, and be delivered to the relevant entity.

All the "conventional systems" I am aware of suffer from a common flaw, one that I refer to as data-silo-fication. This problem ranges from big iron to tiny computing devices.

What are the ramifications of data-silo-fication? Degradation of the following:

1. Agility
2. Privacy
3. Society.

> We have enough of data and linking.

How come? The world is full of data silos that hide behind silky promises of convenience and/or outright cognitive dissonance.

> What we need to
> link are the artifacts of mental processes, which are not so easily
> reduced to "data".

Yes! And you already achieve that in the so-called "real world" using natural language sentences, one of mankind's most powerful inventions [1]. We have used language to encode and decode information for eons.

> That is the real promise of these technologies, but
> is not, so far as I am aware, being pursued anywhere in public.

Well, we have this wonderful thing called the World Wide Web. Its basic architecture boils down to:

1. Using HTTP URIs as names for entity types (or classes) -- the *nature* of entity relationship participants
2. Using HTTP URIs as names for predicates (sentence-forming relations) -- the *nature* of entity relationship types (functional, inverse-functional, transitive, symmetrical, etc.)
3. Using HTTP URIs as names for instances of entity types -- actual entity relationship participants.
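To make the three uses of HTTP URIs concrete, here is a short sketch in TURTLE notation. The example.org URIs are hypothetical placeholders; the rdf: and foaf: terms are real, dereferenceable vocabulary names:

```turtle
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# (3) An HTTP URI naming an instance -- an actual relationship participant
<http://example.org/people/alice#this>
    # (1) An HTTP URI naming its entity type (class)
    rdf:type foaf:Person ;
    # (2) An HTTP URI naming a predicate (a sentence-forming relation)
    foaf:knows <http://example.org/people/bob#this> .
```

Each sentence above is a subject-predicate-object statement whose terms are names you can look up on the Web.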
In addition to the above, albeit not immediately obvious in HTML (which has had the <link/> and <a/> controls in place forever), the aforementioned architecture also includes the ability to construct sentences where the subject, predicate, and object (optionally) are identified by HTTP URIs.

The digital sentences described above can be written to documents using a variety of notations (HTML, XML, CSV, TURTLE, JSON, JSON-LD, etc.), and served from any location on an HTTP network, with name interpretation (description lookup or de-reference) baked in. The semantics of the predicates that hold these sentences together are both machine and human comprehensible. Thus, anyone can look up the *nature* of an entity relationship type en route to understanding the meaning of a given entity relationship. What more do you need?

In my experience, I see a big problem that boils down to understanding that "over-automation" is bad. The recent fixation on imperative programming and applications, in every situation, is utterly broken. The assumption that end-users are dumb, stupid, even lazy is the eternal blind spot that afflicts those who simply cannot see, or even describe, a digital computing realm without fixating on a specific programming language, framework, library, data serialization format, or some new dogma (e.g., Open Source -- which doesn't guarantee data de-silo-fication).

In my eyes, what we need is the ability to put language to use, in the medium that the Web provides. That simply boils down to systematic use of signs [HTTP URIs], syntax [S,P,O or E,A,V term arrangement rules], and semantics [meaning of subject, predicate, and object relationship roles] for encoding and decoding information [data in some context]. Basically, an ability to write the aforementioned digital sentences to HTTP network accessible documents, using a variety of notations.
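As a sketch of what "name interpretation baked in" means: dereferencing a predicate URI such as foaf:knows returns a machine- and human-readable description of the relation's nature. Roughly (an abridged, illustrative rendering of what the FOAF vocabulary publishes; the actual response is larger):

```turtle
# GET http://xmlns.com/foaf/0.1/knows   (Accept: text/turtle)
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

foaf:knows
    rdf:type     owl:ObjectProperty ;   # the *nature* of the relation
    rdfs:label   "knows" ;
    rdfs:domain  foaf:Person ;          # what kind of thing the subject is
    rdfs:range   foaf:Person ;          # what kind of thing the object is
    rdfs:comment "A person known by this person." .
```

That lookup is how both a person and a software agent come to understand the meaning of a given entity relationship, without any out-of-band agreement.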
BTW -- if you know of some existing alternative to what I've described, that doesn't include a hidden data-silo-fication tax, I am all ears :)

> Note, however, the recent release of Linked Data Platform as a W3C
> standard (http://www.w3.org/TR/2015/REC-ldp-20150226/). No doubt this
> will be useful in its own right, but also point the way to future
> opportunities by what it *doesn't* cover.

LDP simply addresses read-write issues for those who can't make use of SPARQL 1.1 Update, the SPARQL Graph Protocol, or WebDAV (which is XML-specific in regard to metadata).

Links:

[1] http://www.slideshare.net/kidehen/understanding-29894555/55 -- Natural Language & Data
[2] https://www.pinterest.com/pin/389561436488854582/ -- What sits between you and your data
[3] http://kidehen.blogspot.com/2014/07/nanotation.html -- Nanotation (inserting digital sentences wherever plain text is allowed)
[4] http://kidehen.blogspot.com/2014/08/linked-local-data-lld-and-linked-open.html -- Linked Local Data vs Linked Open Data

--
Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this
Attachments
- application/pkcs7-signature attachment: S/MIME Cryptographic Signature
Received on Friday, 27 February 2015 23:20:07 UTC