Re: Equality and subclass axioms

Jeff Heflin wrote:
>Ian Horrocks wrote:
> >
> > You didn't "negate" my axiom (you can never do that), you just added some
> > additional information (an additional constraint). Assuming it is true
> > that no model can allow triangles that are both three and four-sided, then
> > this is an example of the kind of "over-constraining" that I mentioned in
> > my email: our ontology now constrains allowable models to the extent that
> > none can ever contain an instance of triangle (i.e., we can infer that
> > triangle is equivalent to the class "Nothing"). If we use a reasoner to
> > check the ontology generated by our crawler, then it will detect this
> > fact, and can alert an intelligent (possibly human) agent to the fact that
> > there may be a problem with the axioms relating to triangle.
> >
>
>But how can a system know when a particular definition is
>"over-constrained" and when an equivalence to "Nothing" is actually
>intended? Is a human going to have to step in every time "Nothing" is
>defined and say, "Yes, I really meant 'Nothing'?" I hope not, because I
>can see ontology integration as a frequent occurrence. I think that
>semantic search engines will need to be able to integrate ontologies on
>the fly to meet the needs/context of each query issued by a user. I
>don't believe you can have a single integrated ontology that works for
>all queries.

There may be a problem of nomenclature here. "Over-constrained" in 
this sense just means "inconsistent". In a sense Ian is right, that 
(monotonic) logic only allows one to add information, so that it is 
impossible to "negate" an assertion with another, if this means 
something like 'erase' or 'nullify'. But this is slightly 
disingenuous, since it IS possible to contradict one assertion with 
another. If A asserts P and B asserts not-P, then we usually would 
say that they disagree, or are contradicting each other. Translated 
into Horrocks-talk, this means that the conjunction of their 
assertions (P and not-P) is so over-constrained that there is no 
possible way to interpret it as describing a state of affairs, i.e. 
what A says about the world cannot be reconciled with, and indeed 
contradicts, what B says about it.
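
To spell that out with the standard model-theoretic argument (using 
|= for "satisfies"; P is just a placeholder for any assertion):

    A asserts:    P
    B asserts:    not-P
    together:     P and not-P

An interpretation I of the combined assertions would have to satisfy 
both, i.e. I |= P and I |= not-P; but I |= not-P just means that I 
does not satisfy P, and no interpretation can both satisfy and fail 
to satisfy P. So the combined set has no models at all, which is 
exactly what "inconsistent" means.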

Jeff asks several questions which all have different answers.

1. How can we know when a set of definitions is "over-constrained"?
Ans: That is precisely what a complete inference engine should be 
able to determine, i.e. it should be able to detect inconsistency.
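
To make the triangle example concrete (the class and axiom names 
here are mine, chosen just for illustration), the crawler's merged 
ontology amounts to something like:

    Triangle    subclass-of    ThreeSided
    Triangle    subclass-of    FourSided
    ThreeSided  disjoint-with  FourSided

Any instance of Triangle would have to lie in both ThreeSided and 
FourSided, which the disjointness axiom forbids; so no model can 
contain an instance of Triangle, i.e. Triangle is equivalent to 
Nothing. Detecting this sort of class unsatisfiability is exactly 
the kind of check a complete (e.g. tableau-based) reasoner performs.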

2. How can we know whether this result was *intended*?
Ans: I'm not sure this question is even meaningful when we are 
talking about putting together assertions from different sources (who 
could possibly have intended this, if A and B didn't even know about 
each other?), but in any case the question goes beyond logical 
semantics. So the answer is: we cannot, at least not without some 
extra information about intentions.

3. Is a human being going to have to step in?
Ans: I see no reason why. In the case where the assertions all come 
from one source, it is presumably the responsibility of whoever 
asserted them to make sure they are consistent. In the case where 
they come from several sources, I doubt if a human being would be 
able to help in any case.

4. I think that semantic search engines will need to be able to 
integrate ontologies on the fly...
Ans: Well, I agree this would be desirable, but if that 'integration' 
is going to involve finding a logically coherent consensus among 
potentially disagreeing agents, then God alone knows how to do it, 
and even He might have trouble.

Pat Hayes

---------------------------------------------------------------------
IHMC					(850)434 8903   home
40 South Alcaniz St.			(850)202 4416   office
Pensacola,  FL 32501			(850)202 4440   fax
phayes@ai.uwf.edu 
http://www.coginst.uwf.edu/~phayes
