Re: Publications about OWL (1 or 2) Full

Hi!

On Thu, 19 May 2011 Markus Krötzsch wrote:

 >> What Markus says here I guess is that, in spite of the limitations of
 >> the punning mechanism, a full-fledged OWL 2 DL reasoners will likely
 >> infer more things than *currently existing* incomplete OWL Full
 >> reasoners.
 >
 > Right.

Not right! See my mail from yesterday:

     <http://lists.w3.org/Archives/Public/semantic-web/2011May/0189.html>

Even many of the most "light-weight" RDF entailment-rule reasoners will 
give you at least some of the metamodeling-related results that you 
cannot get from any conformant OWL 2 DL reasoner, provided that these 
rule reasoners support rdf:type together with owl:sameAs-based 
substitution of nodes in a graph. Just play around with an arbitrary RDF 
triple store that provides some basic inferencing. And the more 
expressive such "RDF reasoners" get, the further they go beyond OWL 2 DL 
concerning metamodeling-based results, while still remaining massively 
incomplete w.r.t. OWL 2 Full: Jena's OWL reasoners, OWLIM... there are 
many around.
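To make the point concrete, here is a toy sketch in Python of the single 
rule mentioned above — owl:sameAs-based node substitution — applied by 
naive forward chaining over triples represented as tuples. This is not 
any particular reasoner, and all the "eg:" names are invented for the 
example; it only illustrates why a rule engine that treats a node 
uniformly gets a metamodeling inference that OWL 2 DL punning blocks:

```python
# Toy forward chaining of the owl:sameAs substitution rule over an RDF
# graph modelled as a set of (subject, predicate, object) tuples.
# All "eg:" names are made up for this illustration.

SAME_AS = "owl:sameAs"

def same_as_closure(triples):
    """Apply to a fixpoint: if (x, owl:sameAs, y) holds, then x and y
    are interchangeable in every node position of every triple."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        pairs = {(s, o) for s, p, o in inferred if p == SAME_AS}
        pairs |= {(o, s) for s, o in pairs}  # owl:sameAs is symmetric
        for x, y in pairs:
            for s, p, o in list(inferred):
                # substitute y for x in every position at once
                new = (y if s == x else s,
                       y if p == x else p,
                       y if o == x else o)
                if new not in inferred:
                    inferred.add(new)
                    changed = True
    return inferred

# "eg:Eagle" is used both as an individual (a species) and as a class:
kb = {
    ("eg:Eagle", "rdf:type", "eg:Species"),
    ("eg:Harry", "rdf:type", "eg:Eagle"),
    ("eg:Eagle", SAME_AS, "eg:AquilaChrysaetos"),
}

closure = same_as_closure(kb)
```

An RDF rule reasoner derives both `("eg:AquilaChrysaetos", "rdf:type", 
"eg:Species")` and `("eg:Harry", "rdf:type", "eg:AquilaChrysaetos")` 
from this graph. Under OWL 2 DL punning, the owl:sameAs statement only 
concerns the *individual* Eagle and says nothing about the *class* 
Eagle, so the second inference is lost.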

 > We know that there cannot be a tool that computes all
 > consequences of OWL with "proper" meta modelling,

No, that's *not* what we know! Taking the term "consequence" to mean the 
same as "logical entailment", and taking "OWL with proper meta modeling" 
to mean "OWL Full", all we know is that there cannot be a tool that 
computes all consequences AND all *non*-consequences of OWL Full. What 
you claim here is, in effect, that OWL Full isn't even *semi*-decidable, 
which has never been proven by anyone. In fact, the whole semantics 
specification of OWL (1/2) Full, i.e. the set of model-theoretic 
semantic conditions that constitute the semantics of OWL Full (the OWL 2 
RDF-Based Semantics), is given as a set of standard first-order 
formulae: OWL 2 Full is, essentially, defined by a first-order theory! 
So it should be clear (at least to anyone familiar with logic) that 
OWL (1/2) Full *is* semi-decidable. And, of course, this means that 
there *can* be tools that compute all consequences of OWL Full. There 
just cannot be complete tools for computing all *non*-entailments.
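The asymmetry between entailments and non-entailments can be shown with 
a tiny sketch, assuming nothing about OWL itself: a Horn theory with an 
infinite fact space (the `nat`/`succ` vocabulary below is invented), 
whose consequences are fairly enumerated. The checker halts on every 
consequence but runs forever on non-consequences — precisely the 
semi-decidability trade-off described above:

```python
# Toy semi-decision procedure (a sketch, not an OWL Full reasoner).
# Theory:  nat(zero).   nat(X) -> nat(succ(X)).
# The set of consequences is infinite, yet recursively enumerable.

def derivable_facts():
    """Fairly enumerate every consequence of the toy theory."""
    term = "zero"
    while True:
        yield ("nat", term)
        term = f"succ({term})"

def entails(goal):
    """Halts with True iff `goal` is a consequence; does not terminate
    on non-consequences, since those are never co-enumerated."""
    for fact in derivable_facts():
        if fact == goal:
            return True

# entails(("nat", "foo")) would loop forever: this procedure computes
# all entailments, but cannot report all *non*-entailments.
```

The same shape of argument applies to OWL 2 Full: a first-order 
axiomatisation makes the entailments recursively enumerable, even though 
no procedure can additionally terminate on all non-entailments.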

 > and we also
 > know that some forms of meta modelling can even lead to
 > intricate inconsistencies that make the whole ontology
 > language paradoxical (PF Patel-Schneider's paper "Building
 > the Semantic Web Tower from RDF Straw" alludes to this
 > issue).

We *knew* this for OWL 1 Full. The results in the cited paper 
fundamentally depend on the so-called "comprehension conditions", which 
were part of the semantics of OWL 1 Full but are no longer normative in 
OWL 2 Full. A different approach called "Balancing" has taken the place 
of the comprehension conditions, so the issue is gone and the paper 
(at least the argument you refer to) is moot.

 > So it seems that a tool that obtains all consequences
 > of plain OWL constructs, and that can still handle some
 > meta modelling is not such a bad choice, even if it is
 > called "OWL DL reasoner" ;-)

To me it seems that a tool that obtains all consequences of plain OWL 
constructs, and that can still handle *all* meta modelling, while 
occasionally not coming back when processing non-consequences, might be 
an even better choice. And that's what we can actually expect from an 
"OWL Full reasoner" (when using "Balancing"). Btw, being semi-decidable 
(or recursively enumerable) is sufficient to produce complete answers 
under the SPARQL 1.1 RDF-Based entailment regime (because all we need 
there is complete enumeration of answers), and that is pretty much what 
I consider sufficient when I am doing reasoning on the Web. Complete 
decision (where the stress is on *non*-entailment detection) is 
certainly a plus in some scenarios, but my main interest in practice is 
in getting inferences, not so much in learning what is not an inference 
(except when I am doing analysis work).
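Why enumerability suffices for query answering can be sketched in a few 
lines, again with an invented toy theory rather than real SPARQL 
machinery: answers to a query pattern are streamed out of the fairly 
enumerated consequences, and every true answer appears after finitely 
many steps — no non-entailment test is ever needed:

```python
# Sketch: answer enumeration over a recursively enumerable set of
# consequences, in the spirit of a SPARQL entailment regime.
# Theory (invented for the example):  nat(zero).  nat(X) -> nat(succ(X)).

from itertools import islice

def consequences():
    """Fairly enumerate all consequences of the toy theory."""
    term = "zero"
    while True:
        yield ("nat", term)
        term = f"succ({term})"

def answers(pattern_predicate):
    """Stream bindings for ?x in the query pattern pattern_predicate(?x).
    Complete: every true answer is eventually yielded."""
    for pred, arg in consequences():
        if pred == pattern_predicate:
            yield {"?x": arg}

# The first three answers to "nat(?x)":
first_three = list(islice(answers("nat"), 3))
```

Note that the stream never has to decide that some binding is *not* an 
answer; completeness of the enumeration is all that query answering 
requires here.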

But even in scenarios where the detection of non-entailments or of 
consistent ontologies is of real relevance, keep in mind that 
undecidability does not mean that I will always get no result; it just 
means that, theoretically, there *exists* some input for which my 
reasoner won't come back. Whether this has any relevance for practical 
reasoning is completely unclear from the mere fact that a language is 
undecidable. And in practice, there will always be a maximal a-priori 
reasoning time granted to the reasoner, so whether the missing result is 
due to undecidability or to some other cause is pretty irrelevant (and 
there are tons of other reasons for a reasoner not to come back in 
time). In any case, one has to check *experimentally* whether it works 
for one's specific application, regardless of whether one uses decidable 
or undecidable reasoning. And I can tell you that successful decision 
under undecidable entailment regimes is not so uncommon... I know, 
because I *have* checked!
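The time-budget point can be made concrete with a small sketch (a toy 
search over an invented infinite fact space, not any real reasoner): 
once the caller grants an a-priori wall-clock budget, "no answer yet" is 
the only observable outcome for a run that does not finish, whatever the 
underlying cause:

```python
# Sketch: a possibly non-terminating entailment search wrapped in an
# a-priori time budget. Theory (invented):  nat(zero). nat(X) -> nat(succ(X)).

import time

def entails_within_budget(goal, budget_seconds):
    """Enumerate consequences until `goal` is found or the budget runs
    out. Returns True (entailed) or None ('no answer in time') -- the
    caller cannot tell whether the cause was undecidability, input
    size, or plain slowness."""
    deadline = time.monotonic() + budget_seconds
    term = "zero"
    while time.monotonic() < deadline:
        if ("nat", term) == goal:
            return True
        term = f"succ({term})"
    return None

# An entailment is found quickly, well within the budget; a
# non-entailment simply exhausts the budget and yields None.
```

In practice the interesting question is then empirical, as argued above: 
how often the budget suffices for the inputs one actually has, not 
whether some pathological input exists in theory.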

Cheers,
Michael

PS: I just found that my spell-checker does not know the term 
"undecidability". Lucky thing!

-- 
Dipl.-Inform. Michael Schneider
Research Scientist, Information Process Engineering (IPE)
Tel  : +49-721-9654-726
Fax  : +49-721-9654-727
Email: michael.schneider@fzi.de
WWW  : http://www.fzi.de/michael.schneider
==============================================================================
FZI Forschungszentrum Informatik an der Universität Karlsruhe
Haid-und-Neu-Str. 10-14, D-76131 Karlsruhe
Tel.: +49-721-9654-0, Fax: +49-721-9654-959
Stiftung des bürgerlichen Rechts
Stiftung Az: 14-0563.1 Regierungspräsidium Karlsruhe
Vorstand: Dipl. Wi.-Ing. Michael Flor, Prof. Dr. rer. nat. Ralf Reussner,
Prof. Dr. rer. nat. Dr. h.c. Wolffried Stucky, Prof. Dr. rer. nat. Rudi 
Studer
Vorsitzender des Kuratoriums: Ministerialdirigent Günther Leßnerkraus
==============================================================================

Received on Wednesday, 25 May 2011 11:48:00 UTC