Re: Equality and subclass axioms

On November 28, Pat Hayes writes:
> > > > > Ian Horrocks wrote:
> > > > > You didn't "negate" my axiom (you can never do that), you just
> > > > > added some additional information (an additional constraint).
> > > > > Assuming it is true that no model can allow triangles that are
> > > > > both three and four-sided, then this is an example of the kind
> > > > > of "over-constraining" that I mentioned in my email: our
> > > > > ontology now constrains allowable models to the extent that
> > > > > none can ever contain an instance of triangle (i.e., we can
> > > > > infer that triangle is equivalent to the class "Nothing"). If
> > > > > we use a reasoner to check the ontology generated by our
> > > > > crawler, then it will detect this fact, and can alert an
> > > > > intelligent (possibly human) agent to the fact that there may
> > > > > be a problem with the axioms relating to triangle.
> > > > >
> > > >Jeff Heflin wrote:
> > > >But how can a system know when a particular definition is
> > > >"over-constrained" and when an equivalence to "Nothing" is actually
> > > >intended? Is a human going to have to step in every time "Nothing" is
> > > >defined and say, "Yes, I really meant 'Nothing'?" I hope not, because I
> > > >can see ontology integration as a frequent occurrence. I think that
> > > >semantic search engines will need to be able to integrate ontologies on
> > > >the fly to meet the needs/context of each query issued by a user. I
> > > >don't believe you can have a single integrated ontology that works for
> > > >all queries.
> > > Pat Hayes wrote:
> > > There may be a problem of nomenclature here. "Over-constrained" in
> > > this sense just means "inconsistent". In a sense Ian is right, that
> > > (monotonic) logic only allows one to add information, so that it is
> > > impossible to "negate" an assertion with another, if this means
> > > something like 'erase' or 'nullify'. But this is slightly
> > > disingenuous, since it IS possible to contradict one assertion with
> > > another. If A asserts P and B asserts not-P, then we usually would
> > > say that they disagree, or are contradicting each other. Translated
> > > into Horrocks-talk, this means that the conjunction of their
> > > assertions (P and not-P) is so over-constrained that there is no
> > > possible way to interpret it as describing a state of affairs, i.e.,
> > > what A says about the world cannot be reconciled with - contradicts -
> > > what B says about it.
> 
> >Ian Horrocks wrote:
> >I was trying to make a serious point, not to engage in disingenuous
> >double-talk. In the triangle example, what A and B assert is not P and
> >not-P, but "X <-> P and X <-> Q", where P -> not-Q. From this we can infer
> >that there is no such thing as an X (or a P, or a Q), just because this is
> >the only state of affairs in which both assertions hold. In some
> >circumstances (like our triangle example) the inference may be trivial,
> >and/or may conflict with our intuition; in this case we may want to
> >conclude that A and B "disagree", and that the ontology is "incorrect". In
> >other circumstances the inference may be non-trivial and/or consistent
> >with our intuition; in this case we may want to conclude that both A and B
> >were "correct", and that by combining their knowledge we have discovered
> >some new and useful fact.
> 
> I didn't mean to imply that you weren't being serious or were engaging 
> in double-talk; please forgive any unintended offense. My point was 
> only that 'overconstrained' in the sense being used here just means 
> 'inconsistent'. And I agree that detecting an inconsistency is a 
> useful process and need not indicate a problem of some kind. There is 
> no real difference between making inferences and detecting 
> inconsistency. Every sentence makes a claim about what the world can 
> possibly be like; drawing conclusions is the process of ruling out 
> some states of affairs as impossible, and detecting a contradiction 
> is discovering that there are no possible states of affairs left.

We are more or less in complete agreement here, so we should probably
give up before we find something we can really argue about. My use of
"constraining" was meant to indicate exactly what you call "ruling out
some states of affairs" - I meant constraining the set of possible
models of the world.
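
To make that concrete, here is a tiny illustrative sketch (just
brute-force enumeration in Python, my own toy code rather than anything
a real reasoner does) of what I mean by constraining models: each
assertion rules out some truth assignments, and inconsistency is simply
the point at which none are left.

  from itertools import product

  PROPS = ["P", "Q"]

  def models(assertions):
      """Return the truth assignments that satisfy every assertion."""
      worlds = [dict(zip(PROPS, vals))
                for vals in product([True, False], repeat=len(PROPS))]
      return [w for w in worlds if all(a(w) for a in assertions)]

  asserts_p = lambda w: w["P"]          # A asserts P
  asserts_not_p = lambda w: not w["P"]  # B asserts not-P

  print(len(models([])))                          # 4 - nothing ruled out yet
  print(len(models([asserts_p])))                 # 2 - only the P-worlds remain
  print(len(models([asserts_p, asserts_not_p])))  # 0 - over-constrained, i.e. inconsistent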

> BTW, in your example, I think it is really not at all clear what one 
> 'should' infer. If one accepts both of A and B as reliable sources of 
> truth, then one would be justified in concluding that there were no 
> X's, indeed. But speaking purely pragmatically, that would seem to me 
> to be an unlikely conclusion. Why would a sane agent make an explicit 
> assertion about things which do not exist? It seems more likely, in 
> this case, that A and B in fact disagree about the nature of X's, and 
> hold rival opinions. Logic is purely neutral on this point; it only 
> tells us that something has got to give, as it were. We can't believe 
> in X's and also believe both A and B; but which way to resolve this 
> matter is up to us to decide. (For example, it might be instructive 
> to check whether A and B, separately, would agree with the conclusion 
> that there are no X's.)

I never said I was clear on what the inference "means" in any absolute
sense, or on what to do about it - that is the problem of the
person/agent who chooses to accept both A and B as "reliable sources
of truth".

As for the case where the inconsistency in X could be "correct", I was
thinking more of querying, where a geometrically challenged user may
ask for information about triangles that are both three and four
sided. This may sound like a joke, but when ontologies become
sufficiently large and rich it is quite possible even for
sophisticated users to form query classes that are logically
inconsistent. Being able to inform them that this is the case is
useful both from the user's point of view (they can try to reformulate
the query) and from the system's point of view (no need to bother
searching).
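
For what it's worth, the triangle case can be spelled out in the same
brute-force style (again an illustrative Python sketch of mine, with
made-up class names, not the output of any particular system): given
axioms saying that triangles are three-sided, triangles are four-sided,
and nothing is both, every membership pattern that survives excludes
Triangle, so the query class - and indeed Triangle itself - is
equivalent to "Nothing".

  from itertools import product

  CLASSES = ["Triangle", "ThreeSided", "FourSided"]

  def satisfies(m):
      # Axioms: Triangle => ThreeSided, Triangle => FourSided,
      # and nothing is both ThreeSided and FourSided.
      return ((not m["Triangle"] or m["ThreeSided"]) and
              (not m["Triangle"] or m["FourSided"]) and
              not (m["ThreeSided"] and m["FourSided"]))

  # Membership patterns for a single candidate individual that are
  # consistent with the axioms...
  consistent = []
  for vals in product([True, False], repeat=len(CLASSES)):
      m = dict(zip(CLASSES, vals))
      if satisfies(m):
          consistent.append(m)

  # ...never include Triangle, i.e. the class is unsatisfiable.
  print(any(m["Triangle"] for m in consistent))   # False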

Ian

Received on Tuesday, 28 November 2000 11:58:50 UTC