Re: ShEx relation to SPIN/OWL

Peter

Although I agree with what you write in principle, there is an underlying
practical problem, which we face daily in the management of the LOV
repository.
Again, it is not a minor point that RDF uses URIs for classes and
properties. If you find a class or property URI somewhere in data (e.g.,
DBpedia), how do you work out the semantics of that URI? In other words,
how do you find the document(s) you are speaking about which define it,
and which of them will you abide by, in order to determine in practice the
closure of the description of this URI?
Look at the result of this query  http://bit.ly/UPGA25  run against the
current LOV database of 450 vocabularies/ontologies. It yields all elements
related to foaf:Person in various namespaces (and hence in as many
documents). Each of those 300+ assertions refines, in some way, the
semantics of foaf:Person. If you set aside the 25 defined by the foaf:
namespace itself, you are left with more than twenty different
namespaces/vocabularies/documents to explore, and to include or not when
determining the closure.
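
For those who cannot follow the shortened link, here is a rough sketch in
Python (with SPARQLWrapper) of the kind of query I mean. The endpoint URL
and the exact query shape are my assumptions, not the exact query behind
the link:

from SPARQLWrapper import SPARQLWrapper, JSON

# Assumed LOV SPARQL endpoint; it may have moved since this was written.
LOV_ENDPOINT = "https://lov.linkeddata.es/dataset/lov/sparql"
PERSON = "http://xmlns.com/foaf/0.1/Person"

sparql = SPARQLWrapper(LOV_ENDPOINT)
sparql.setQuery("""
    SELECT DISTINCT ?vocab ?s ?p ?o
    WHERE {
      GRAPH ?vocab {
        ?s ?p ?o .
        FILTER (?s = <%s> || ?o = <%s>)
      }
    }
""" % (PERSON, PERSON))
sparql.setReturnFormat(JSON)

bindings = sparql.query().convert()["results"]["bindings"]
vocabs = {row["vocab"]["value"] for row in bindings}
print(len(bindings), "assertions mention foaf:Person")
print(len(vocabs), "distinct named graphs (vocabularies) contain them")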

Of course, if you look only at the FOAF vocabulary via an HTTP GET on
foaf:Person, you will not access, or even be aware of, all this extra
semantics, and you can ignore it, apart from the parts which FOAF itself
reuses.
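
To make that concrete, here is a minimal "follow your nose" sketch
(assuming Python and rdflib, which does the HTTP GET and content
negotiation for you): dereferencing foaf:Person only ever gives you what
the FOAF document itself says.

from rdflib import Graph, URIRef

PERSON = URIRef("http://xmlns.com/foaf/0.1/Person")

g = Graph()
# The GET on the URI resolves to the FOAF specification document only.
g.parse(str(PERSON))

# Everything the FOAF document itself asserts about foaf:Person ...
for s, p, o in g.triples((PERSON, None, None)):
    print(p, o)
# ... but none of the 300+ assertions sitting in other vocabularies'
# documents, since this GET never touches them.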

This is perhaps an extreme case, but it illustrates the practical problem:
when using a URI, you first have to find out which semantics you abide by
before passing it to reasoners or constraint checkers. One could imagine
something like content negotiation on the URI, where you pass as
parameters the type of description you want for it (give me SPIN, OWL,
Shapes ...). But given the messy way content negotiation is already
implemented after the httpRange-14 resolution (based, again, on our LOV
experience), I'm very skeptical about such a solution in practice.
Moreover, different parties will use different documents which the
original URI owner does not control, is not aware of, let alone agrees with.
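
Just to make that hypothetical concrete (and only as an illustration): the
profile URIs and the "profile" media type parameter below are invented for
this sketch; no publisher today is obliged to understand them, which is
exactly my worry.

import requests

PERSON = "http://xmlns.com/foaf/0.1/Person"

def fetch_description(uri, flavour):
    # flavour names the kind of description wanted: "owl", "spin", "shapes" ...
    accept = 'text/turtle; profile="http://example.org/profiles/%s"' % flavour
    resp = requests.get(uri, headers={"Accept": accept}, allow_redirects=True)
    resp.raise_for_status()
    return resp.text

# Nothing guarantees the publisher serves (or is even aware of) an OWL-,
# SPIN- or Shapes-specific description document for this URI.
owl_doc = fetch_description(PERSON, "owl")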

My concern is that this opens the door wide to further balkanization of the
Semantic Web, with each tribe abiding by its own, not necessarily public,
interpretation of shared vocabularies.


2014-07-31 22:55 GMT+02:00 Peter F. Patel-Schneider <pfpschneider@gmail.com>:

> All these complaints about RDFS and OWL appear to be based on the
> conception that RDFS and OWL work with a single document containing
> everything that can ever be said about a particular vocabulary.
>
> However, this is a misconception.  It is certainly possible in RDFS and
> OWL to have multiple documents that speak to the same vocabulary.  This is
> done, for example, when OWL ontologies are extended.  A core document talks
> about the core vocabulary and other documents add new vocabulary and add
> new information about the core vocabulary as well.
>
> The exact same thing can (and should) be done with constraints.  You can
> have a document that provides the core information about a vocabulary.  You
> can have multiple documents that provide constraints on this vocabulary for
> various purposes.  When validating some information one can pick which set
> or sets of constraints to apply.
>
> This division can also be done with the axioms of the ontology.  There is
> no need for all the axioms of an ontology to be in the same document,
> leading to the possibility of having the RDFS portion be in one document
> and the non-RDFS portion in another document.  Other divisions are also
> possible.
>
> So stating that if you use RDFS and OWL you end up with a lot of baggage
> and lose on reusability is completely false.
>
> Peter F. Patel-Schneider
>
>

Received on Friday, 1 August 2014 10:16:23 UTC