Weakness of RDF? (was: Tuple Store, Artificial Science, Cognitive Science and RDF (Re: What is a Knowledge Graph? CORRECTION))

On 26/06/2019 07:58, Patrick J Hayes wrote:
> [...] For AI purposes, RDF is absurdly weak and
> inexpressive. But AI is not what it is trying to do.
>

I'm reminded of a most interesting poster by yourself and Peter Patel-Schneider, 
presented at ISWC 2013.  One of the results, as I recall, was that RDF semantics 
is so weak that any RDF graph can be satisfied by an interpretation with no more 
than 3 members in its domain of discourse (provided certain semantic extensions, 
such as some of those in OWL, are absent).

It was only afterwards that it occurred to me:  this isn't a bug, it's a feature!

As I see it, one of the key consequences of the RDF semantics is:

Merging lemma. The merge of a set S of RDF graphs is entailed by S, and entails 
every member of S.
-- https://www.w3.org/TR/2002/WD-rdf-mt-20020429/#entail

(I don't see this mentioned in the more recent RDF Semantics spec, but I assume 
it still holds.)

My take is that this lemma is what justifies combining (i.e. merging) RDF from 
independent sources, which I see as one of the key advantages of RDF over 
popular data models that lack an associated formal semantics: we have a rule 
for combining data that comes with an (admittedly weak) semantic guarantee.
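
To make the merging lemma concrete, here is a minimal sketch in plain Python 
(not rdflib), assuming triples are modelled as (subject, predicate, object) 
tuples and graphs as sets of such tuples; blank nodes are marked with a "_:" 
prefix.  The names and encoding are my own illustration, not anything from the 
spec.  The one subtlety the lemma's statement hides is that a proper merge must 
"standardize apart" blank nodes, so that independent sources using the same 
blank-node label do not accidentally co-refer:

```python
# Sketch of an RDF merge: union of the graphs after renaming blank
# nodes apart.  Triples are (s, p, o) tuples; blank nodes are strings
# beginning with "_:".  This is an illustrative encoding, not rdflib.

def rename_bnodes(graph, suffix):
    """Rename blank nodes so they are unique to this graph."""
    def r(term):
        return term + suffix if term.startswith("_:") else term
    return {(r(s), r(p), r(o)) for s, p, o in graph}

def merge(graphs):
    """Merge a list of graphs: standardize blank nodes apart, then union."""
    merged = set()
    for i, g in enumerate(graphs):
        merged |= rename_bnodes(g, f"_g{i}")
    return merged

# Two independent sources that both happen to use the label _:x.
g1 = {("ex:alice", "ex:knows", "_:x")}
g2 = {("_:x", "ex:worksFor", "ex:acme")}

m = merge([g1, g2])
# The merge keeps both triples, with the two _:x's kept distinct:
# it entails each source graph (map each source's blank node to its
# renamed counterpart) while asserting nothing beyond their union.
assert len(m) == 2
```

Because simple entailment only requires mapping a graph's blank nodes into the 
entailing graph, the merge entails each member of the set, and (being just 
their union, suitably renamed) is entailed by the set in turn.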

Yet the very weakness of these semantics suggests to me that they make a 
minimum of assumptions about how the RDF is being used, and are hence less 
likely to "get in the way" of desired application semantics.

#g
--

Received on Wednesday, 26 June 2019 08:40:46 UTC