
RE: [OWLWG-COMMENT] Re: Cardinality Restrictions and Punning

From: Pat Hayes <phayes@ihmc.us>
Date: Wed, 2 Jan 2008 10:35:36 -0800
Message-Id: <p06230910c3a1814a5e0c@[]>
To: "Michael Schneider" <schneid@fzi.de>
Cc: "Owl Dev" <public-owl-dev@w3.org>

>Happy new year, Pat!

And to you, Michael.

>Pat Hayes wrote on December 22, 2007:
>>I hereby officially shut down my semantic engine for 12 days.
>Ok, then let's turn it on again! :)
>You answered to me:
>>>I don't think that it is a useful idea to allow OWL-DL-consistent
>>>ontologies to become inconsistent in OWL-Full.
>>I disagree. These are two distinct languages which differ profoundly
>>in their basic methodology and semantics, one more expressive than
>>the other and which has the less expressive language embedded into it
>>as a proper subset. In fact, under these conditions it is almost
>>inevitable that this will occur. Why would one not expect this?
>>Obvious contradictions in quantified logic, such as
>>(forall (x)(= x a))
>>(not (= b a))
>>are consistent in propositional logic.
>Hm, I would rather say that the first of these two formulas is simply a
>non-wellformed expression in propositional logic. And thus, the whole
>expression is non-wellformed.

Perhaps I expressed myself carelessly. I meant 
that they are consistent in the propositional subset of 
quantified logic. That is, take the syntax of 
FOL, but restrict reasoning to the purely 
propositional inference rules, and the semantics 
to that of the propositional connectives, so that 
each quantified sentence is treated as an atomic 
proposition. This is the sense in which 
propositional logic is a sub-logic of FOL.
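Pat's point can be checked mechanically. A minimal sketch (the encoding is mine, not anything from OWL): treat each quantified sentence as an opaque atom and search for a propositional model of the pair above.

```python
from itertools import product

# Treat each quantified sentence as an opaque atomic proposition:
#   P stands for (forall (x) (= x a))
#   Q stands for (= b a)
# In FOL the pair {P, (not Q)} is contradictory, but the propositional
# sub-logic sees only two unrelated atoms, so a model exists.

def satisfiable(clauses, atoms):
    """Brute-force search: return a truth assignment satisfying every
    clause, or None if there is none."""
    for values in product([True, False], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if all(clause(assignment) for clause in clauses):
            return assignment
    return None

model = satisfiable(
    [lambda a: a["P"],       # (forall (x) (= x a))
     lambda a: not a["Q"]],  # (not (= b a))
    ["P", "Q"],
)
print(model)  # {'P': True, 'Q': False}: propositionally consistent
```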

>So for the question about DL-consistent/Full-inconsistent OWL ontologies: I
>am only interested in examples, where the ontology is syntactically ok in
>OWL-DL, so that it may have a model, of course.
>>>And if this should not be preventable in general, one should at least
>>>take care that the cases for which this happens reduce to artificially
>>>looking "research examples".
>>Again I disagree. You may be being spooked by the word
>>"inconsistent"; but as I am sure you know, this is simply another way
>>to say that the more powerful language is able to prove entailments
>>which are invisible to the less expressive language.
>I understand that inconsistent ontologies entail /everything/ (every RDF
>graph in the case of OWL-Full), since they entail some contradiction (ex
>falso quodlibet).

No no, I did not mean to refer to that, which I 
agree is not particularly interesting. I meant 
rather the observation that (A entails B) just 
when (A and not B) is inconsistent. So 
inconsistencies and entailments are rather 
closely connected: you can't have one without the 
other, so to speak.
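This connection is the standard reduction of entailment to unsatisfiability, and it is easy to sketch for the propositional case (a brute-force checker; all names here are mine):

```python
from itertools import product

def satisfiable(formula, atoms):
    """Brute-force check: does any truth assignment make `formula` true?"""
    return any(
        formula(dict(zip(atoms, values)))
        for values in product([True, False], repeat=len(atoms))
    )

def entails(a, b, atoms):
    """A entails B exactly when (A and not B) is unsatisfiable."""
    return not satisfiable(lambda v: a(v) and not b(v), atoms)

# (P and Q) entails P ...
print(entails(lambda v: v["P"] and v["Q"], lambda v: v["P"], ["P", "Q"]))  # True
# ... but P alone does not entail Q.
print(entails(lambda v: v["P"], lambda v: v["Q"], ["P", "Q"]))  # False
```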

>  So from a formal point of view, I receive more information
>from an inconsistent Full-ontology than from a consistent DL-ontology.

Of course. And I wouldn't say you get more 
information just because you get more 
entailments, for exactly this reason.

>>So, sometimes
>>there will be cases where a result is entailed in OWL-Full which is
>>not entailed in OWL-DL. I don't find this at all a bad thing, or
>>something to be avoided: on the contrary, this is often the chief
>>motivation for wanting to use a more expressive language.
>I perfectly agree that it is generally desirable to get more entailments
>from OWL-Full than from OWL-DL. But the scenario we are discussing here is
>not just about "more" entailments.

No, that is what I am discussing. But for every 
such case, there will be a corresponding case 
(less interesting, but it will exist) where a 
consistent ontology becomes inconsistent. To try 
to rule out those would also rule out the useful 
entailments.

>  Inconsistency gives me *all* entailments,
>which I do *not* regard to be desirable. An inconsistent ontology is
>effectively useless.

Being able to detect the inconsistency may be 
extremely useful. The cases we are considering, 
and need to distinguish, are in effect

1. This ontology is inconsistent (broken) and I 
can detect that (and then maybe do something 
about it)
2. I refuse to allow myself to recognize this 
kind of inconsistency, and will treat it as 
consistent, by interpreting it in a crazy way 
which makes no sense.

Of the two, I prefer the first.

>Whatever entailment I am querying for, the answer will
>always be "yes".
>So, from a practical point of view, I do *not* really learn more from an
>inconsistent OWL-Full ontology than from the same ontology being
>OWL-DL-consistent.

What you learn is that it IS inconsistent.

>In effect, I do not learn anything from an inconsistent
>ontology. See below for a discussion of your example.
>>And I am
>>talking about real examples, not "research examples". Such as being
>>able to infer, from the fact that a taxonomy represented as a class
>>of classes contains only three members and that a thing is not in any
>>of them, that it is not classified by the taxonomy.
>I don't see how to express this example in OWL-DL.

I think you will be able to in 1.1, using 
punning: but you will not be able to draw the 
right conclusions. So I would prefer to say that 
you will SEEM to be able to represent it in 1.1, 
but this is in fact an illusion.

>So, at least, it does not
>seem to be intended to be an example for the DL-consistent/Full-inconsistent
>problem. But let's discuss it from the Full-inconsistent perspective solely.
>If this ontology is inconsistent,

It is perfectly consistent. The inconsistent 
example would arise if you add to it the 
statement that the thing *is* classified by the 
ontology. That would be consistent in DL(1.1) but 
inconsistent in Full, if I follow the reasoning 
so far. But I confess I have not sat down and 
tried to write this kind of example out in detail 
to see what happens in 1.1. I mentioned it only 
to emphasize that things like this do happen in 
"real life" (whereas anyone who asserts that 
the universe is finite deserves all the trouble 
they will get, IMO).
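A rough extensional sketch of the taxonomy example, in plain Python rather than OWL, with illustrative class names of my own choosing:

```python
# The taxonomy is modeled as a class of classes with exactly three
# members (a crude stand-in for the class-of-classes in the example).
Mammal, Bird, Fish = frozenset({"cat"}), frozenset({"crow"}), frozenset({"cod"})
taxonomy = {Mammal, Bird, Fish}

def classified_by(thing, taxo):
    """A thing is classified by the taxonomy iff some member class
    contains it."""
    return any(thing in cls for cls in taxo)

# "spider" is in none of the three classes, so -- because the taxonomy
# has ONLY these three members -- it is not classified by the taxonomy:
print(classified_by("spider", taxonomy))  # False

# Asserting that "spider" *is* classified would now be a detectable
# contradiction; a semantics that cannot see the class-of-classes
# structure cannot draw that conclusion.
```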

>  it definitely *will* entail this "thing is
>not classified by taxonomy" statement. But the ontology will then also
>entail the converse statement, i.e. that the thing *is* classified by the
>taxonomy. And this is certainly not something one wants to receive as a
>result.
>If the above ontology is inconsistent, I would say that it is a fallacy to
>believe that one receives the desired entailment from the premises you
>mentioned above (taxonomy with three classes, none of them containing the
>regarded thing). Instead, one receives this entailment simply from the state
>of inconsistency (or more precisely, from the entailed contradiction) of
>that ontology. One would receive the same entailment from *any* other
>inconsistent ontology, too, regardless of what other "facts" there are in such
>an ontology. So one really does not get any valuable information from an
>inconsistent ontology.
>This is the reason why I would regard it as a real problem if OWL-1.1
>were to get a Full version in which it could happen *too easily* that an
>RDF graph is DL-consistent, but Full-inconsistent.

Well, even if this were true - and I don't accept 
that it is, as all the examples I've seen have 
been rather artificial - what exactly would it 
mean? It seems to me that it would mean that the 
DL subset was simply too weak to be able to 
detect many inconsistencies. (As it is based on 
punning, which is almost a generator of 
inconsistency all by itself, this is hardly 
surprising.) But to me, it seems that to 
denigrate a Full semantics for being too prone to 
label inconsistencies is misplacing the blame. 
Surely the real problem here is that the DL 
semantics is too weak, and unable to detect many 
real inconsistencies which arise. It is much 
better to detect them than to pretend they aren't 
there.

>For OWL-1.0, I have pondered for a while about an example. But the only
>method I am currently aware of to produce such an example is the
>"finite-universe" trick, as originally brought to my attention by Peter [1].
>The existence of this method for OWL-1.0 shows me that it would, of course,
>not be reasonable to demand that the DL-consistent/Full-inconsistent problem
>must not occur in OWL-1.1. On the other hand, if playing with the
>cardinality of the OWL universe is the only way to evoke this problem in
>OWL-1.0 [FIXME!], then I am not very worried from a practical point of view,
>because the only use case for this that I know of is to get a notion of
>"closed world" with OWL [2]. And I regard this to be more of a niche
>application. Anyway, it is a kind of abuse of OWL, since the open world
>assumption is fundamental to OWL.


>So, to conclude, I do not regard the "finite-universe" trick to be a show
>stopper. But if you would show me examples of simple and natural looking
>OWL-1.0 ontologies, which are DL-consistent but Full-inconsistent, then I
>would probably have to reconsider my opinion.

I can't do that because in 1.0, the DL syntax 
boundaries were drawn with exquisite care to 
prevent this happening. I doubt that this trick 
(which was almost entirely Peter's, by the way: I 
can recall the moment when he said it would be 
possible, with a kind of faraway look in his 
eyes, and I privately wished him luck; and was 
most impressed, later, when he did it) can ever 
be done when those syntax boundaries have been 
loosened to allow punning.

>For OWL-1.1, I do not yet know whether the new features like QCRs, sub
>property chains, or additional property characteristics will bring such
>examples nearer to us. Well, I would certainly want to live with this
>situation then, because these new features are much too important to be
>dropped w.r.t. this OWL-Full related problem. But at least for data/object
>property punning

Yes, well, I think this is a terrible idea, as 
I've already said. I was against the very 
distinction between 'object' and 'data' in the 
first place, as it has absolutely no semantic 
justification: but to try to both have it and at 
the same time pretend not to have it, using 
punning, is a recipe for complete semantic 
confusion. These examples are more about that 
confusion than about a DL/Full argument, IMO. 
It's just that the confusion is more apparent in 
the Full world because you can say more stuff.

>, I can already see from Jeremy's example [3] that the
>DL-consistent/Full-inconsistent problem can easily arise even for very
>simple and natural looking ontologies. And in this case, my opinion is that
>the tradeoff should rather be in favour of OWL-1.1-Full instead of property
>punning. (But, of course, I already know that you will agree with me on
>this one point :)).

Indeed. Let us keep a united front there, at least :-)


>Dipl.-Inform. Michael Schneider
>FZI Forschungszentrum Informatik Karlsruhe
>Abtl. Information Process Engineering (IPE)
>Tel  : +49-721-9654-726
>Fax  : +49-721-9654-727
>Email: Michael.Schneider@fzi.de
>Web  : http://www.fzi.de/ipe/eng/mitarbeiter.php?id=555
>FZI Forschungszentrum Informatik an der Universität Karlsruhe
>Haid-und-Neu-Str. 10-14, D-76131 Karlsruhe
>Tel.: +49-721-9654-0, Fax: +49-721-9654-959
>Stiftung des bürgerlichen Rechts
>Az: 14-0563.1 Regierungspräsidium Karlsruhe
>Vorstand: Rüdiger Dillmann, Michael Flor, Jivka Ovtcharova, Rudi Studer
>Vorsitzender des Kuratoriums: Ministerialdirigent Günther Leßnerkraus

IHMC		(850)434 8903 or (650)494 3973   home
40 South Alcaniz St.	(850)202 4416   office
Pensacola			(850)202 4440   fax
FL 32502			(850)291 0667    cell
phayes@ihmc.us       http://www.ihmc.us/users/phayes
Received on Wednesday, 2 January 2008 18:35:51 UTC
