Re: relevant paper on using OWL (description logics) for RDF validation

On 10/26/2014 10:03, Peter F. Patel-Schneider wrote:
> In particular, the results of SPIN rules should participate in 
> whatever other inferencing is going on.

In TopBraid, for example, we have an architecture in which multiple 
inferencing engines can be chained together. This allows users to run 
OWL inferencing first and then SPIN rules (possibly iterated until no 
new triples are produced). Once the inferred triples are in place, you 
can press the constraint checking button on the resulting graph.
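To illustrate the chaining idea, here is a minimal sketch using Apache 
Jena's InfModel API. This is not how TopBraid is implemented; the 
generic rule reasoner merely stands in for a SPIN rule engine, and the 
file name and example.org URIs are made up:

    import java.util.List;
    import org.apache.jena.rdf.model.*;
    import org.apache.jena.reasoner.ReasonerRegistry;
    import org.apache.jena.reasoner.rulesys.GenericRuleReasoner;
    import org.apache.jena.reasoner.rulesys.Rule;

    public class ChainedInference {
        public static void main(String[] args) {
            // Base graph holding only the asserted triples
            Model base = ModelFactory.createDefaultModel();
            base.read("ontology.ttl");  // hypothetical input file

            // Step 1: OWL inferencing over the base graph
            InfModel owlStep = ModelFactory.createInfModel(
                    ReasonerRegistry.getOWLReasoner(), base);

            // Step 2: a generic rule reasoner (standing in for a
            // SPIN rule engine) layered on the OWL-entailed graph
            List<Rule> rules = Rule.parseRules(
                "[r1: (?x <http://example.org/parent> ?y) " +
                " -> (?y <http://example.org/child> ?x)]");
            InfModel ruleStep = ModelFactory.createInfModel(
                    new GenericRuleReasoner(rules), owlStep);

            // Constraint checking would then run against ruleStep,
            // which exposes asserted plus inferred triples.
        }
    }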

>
>> If someone wants to run
>> constraint checks over an OWL model then it needs to run the SPARQL 
>> processor
>> over a graph that has the additional inferences visible.
>
> Well, the situation is not nearly so simple as this.  The additional 
> inferences may be infinite, for example.

I think you describe the same problem in your paper, where you state 
(on page 6):

"Fortunately, it is relatively easy to recover from this problem.
All that is needed is to add all the RDF (or RDFS)
consequences to the graph. Yes, there are an infinite number
of these consequences, but our formal development does not
care whether the graph is finite or infinite."

Some RDF/OWL APIs such as Apache Jena have graph implementations that 
compute inferences on the fly, without ever asserting the new triples 
into the base graph. A SPARQL engine neither knows nor cares where 
those triples come from.
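As a concrete sketch with Jena (the input file and the Person class 
URI are placeholders): RDFS entailments are computed on demand by the 
InfModel, and SPARQL queries it like any other graph.

    import org.apache.jena.query.*;
    import org.apache.jena.rdf.model.*;
    import org.apache.jena.reasoner.ReasonerRegistry;

    public class QueryOverInference {
        public static void main(String[] args) {
            Model base = ModelFactory.createDefaultModel();
            base.read("data.ttl");  // hypothetical input file

            // The InfModel derives RDFS entailments lazily; nothing
            // is ever asserted back into the base graph.
            InfModel inf = ModelFactory.createInfModel(
                    ReasonerRegistry.getRDFSReasoner(), base);

            // SPARQL sees asserted and inferred triples alike.
            String q = "SELECT ?s WHERE "
                     + "{ ?s a <http://example.org/Person> }";
            try (QueryExecution qe =
                     QueryExecutionFactory.create(q, inf)) {
                ResultSetFormatter.out(qe.execSelect());
            }
        }
    }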

> Maybe SPIN doesn't need the information separated out.  However, then 
> inference can produce new spin:constraint relationships.

If the inferencing engine is implemented properly, then the new rdf:type 
or spin:constraint triples would already have been inferred (e.g. via 
backward chaining) by the time the system figures out which checks to 
run. So I don't see how this scenario could be a problem. Inferring new 
spin:constraint triples feels like a corner case anyway and could easily 
be declared unsupported, just as OWL disallows redefining its built-in 
vocabulary.
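Put differently, constraint discovery itself can be just another query 
against the inference model, so any inferred spin:constraint triples 
are picked up automatically. A hedged sketch (only the spin namespace 
URI is real; the class and method names are made up):

    import org.apache.jena.query.*;
    import org.apache.jena.rdf.model.InfModel;

    public class ConstraintDiscovery {
        // Find constraint attachments by querying the inference
        // model, so inferred spin:constraint triples are included.
        static void listConstraints(InfModel inf) {
            String q =
                "PREFIX spin: <http://spinrdf.org/spin#> " +
                "SELECT ?class ?constraint " +
                "WHERE { ?class spin:constraint ?constraint }";
            try (QueryExecution qe =
                     QueryExecutionFactory.create(q, inf)) {
                ResultSetFormatter.out(qe.execSelect());
            }
        }
    }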

Holger

Received on Sunday, 26 October 2014 03:31:48 UTC