Re: ISSUE: Classes as instances [old]

Chris, I just discovered this old thread on the Webont record, I 
missed it at the time. Basically I agree with you, but for one small 
but important point (which is currently relevant, so here goes):

>I have written extensively on issues in which the classes as instances
>problem comes up (e.g. [1] and [2]), and also about cases where the "isa"
>relation is misused and another transitive relation is intended (e.g. [3]
>and [4]).  The fact that the latter happens does not remove the need for
>the former, however.
>
>There are a number of well understood canonical examples of when the only
>correct solution is to allow second order predicates.

NO, they aren't second order. Or at any rate, it's not necessary for 
them to be second order. This is a common mistake, but it's really 
centrally important. There is nothing non-first-order about treating 
a predicate as an object. Think about it: if there were, then it 
would be *logically impossible* to have a first-order axiomatization 
of set theory. What makes a logic second-order is having a logical 
(semantic) requirement on the *universe* of predicates, i.e. a special 
condition on the interpretation of second-order *quantifiers*. But 
one can talk about predicates in a first-order language perfectly 
well. Henkin, years ago, showed that one can give a first-order 
semantics to the entire vocabulary of type theory, with quantifiers 
all the way up to omega. Perfectly first-order, though: complete, 
compact, satisfies the Löwenheim-Skolem theorem, instantiation can be 
handled by a unification algorithm, you name it.

>The "animal/species" example is one.  There is no way to model this
>properly in first order.

There is no way to model it without allowing predicates to apply to 
predicates, indeed. But you don't have to use second-order logic to 
do that, is the key point. You CAN do it in first order. You can do 
it directly in CL or KIF or Lbase, all of which are first order, and 
you can do it in any first-order syntax if you are willing to use 
things like 'holds'.

>In the library domain there are numerous taxonomies, such as subject and
>genre, which can't be combined (in one system) in first order.
>
>The wordnet and thesauri examples are classic meta-modeling problems, in
>which we have a description of a class of entities whose instances form a
>taxonomy.
>
>In all of these cases, you can invent modeling "hacks" by attempting to
>collapse the is-a relation into a special purpose relation in first order,
>however  by doing so you lose much of the inferential power of the
>language.  This is especially true in description logic like languages
>which FEATURE inference based on the is-a relation.

I agree, the hacks are horrible. But these hacks are only needed 
because of the artificial *syntactic* restrictions usually imposed on 
FOL syntax; and you can have FOL without those restrictions. RDF is 
blessedly free of them, and that is precisely the CL/Lbase idea. In 
common logic syntax you can treat is-a as a relation which is also an 
object, since all relations are objects. The treatment of rdf:type 
(which is 'is-a' in RDFS) in Lbase illustrates this:

rdf:type(?x, ?y)  implies ?y(?x)
(  ?y(?x) and rdfs:Class(?y)  ) implies rdf:type(?x,?y)
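
Those two rules can be run mechanically. Here is a small illustrative 
sketch (mine; the fact names are hypothetical) that saturates a fact 
base under both rules, deriving class membership from class 
application and vice versa, with classes as ordinary objects 
throughout:

```python
# Sketch of the two Lbase rules over a simple fact base.
# A triple ("rdf:type", x, y) encodes rdf:type(x, y);
# a pair (y, x) encodes the atom y(x).

facts = {("rdf:type", "Eagle", "rdfs:Class"),   # Eagle is a class
         ("Eagle", "harry")}                    # ?y(?x) with ?y = Eagle

def saturate(facts):
    """Apply both rules to a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for f in facts:
            # rdf:type(?x, ?y) implies ?y(?x)
            if len(f) == 3 and f[0] == "rdf:type":
                new.add((f[2], f[1]))
            # ( ?y(?x) and rdfs:Class(?y) ) implies rdf:type(?x, ?y)
            if len(f) == 2 and ("rdfs:Class", f[0]) in facts:
                new.add(("rdf:type", f[1], f[0]))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

closed = saturate(facts)
print(("rdf:type", "harry", "Eagle") in closed)  # True
```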

>To argue that there is no need for this is rather silly, really.  I'm
>surprised to see Ian and Peter doing so.  On the other hand, to argue that
>there is a severe computational expense to allowing it IS, in my opinion,
>quite serious.

It's a serious claim, often made, but I haven't seen the case argued 
in detail. It seems to be a chorus of "But that would break 
my code, waaa", rather than any kind of actual detailed argument. 
There is an easy (trivial) adaptation of the standard unifier which
covers the extended syntax and yields a complete inference process 
for CL syntax from any complete 'classical' FO reasoner. Now, this 
would admittedly give some optimizers trouble in some cases; but it 
wouldn't, as far as I know, break any existing optimizers on any 
*existing* use case. In other words, all current reasoners would, 
after a small code tweak to the unifier, work unchanged on all 
current FO syntax, while also being complete (if less efficient) on 
the extended syntax. But to make this comparison properly, one needs 
to compare the extended syntax cases with what a current FO reasoner 
would do with their hacked-around conventional-syntax translations: 
and if you use things like 'holds' you will break all the optimizing 
code (which relies on recognizing the relational constants) in any 
case. So there is no net loss, it seems to me, but a very large increase
in expressivity and usefulness.
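
The "small code tweak" amounts to this (a toy sketch of my own, not 
anyone's production unifier; occurs check omitted for brevity): treat 
the predicate position of an atom as just another argument position, 
so a variable there unifies with a relation name like any other term.

```python
# Syntactic unification over tuples, where an atom P(a, b) is the
# tuple (P, a, b) and the predicate slot may hold a variable.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    """Chase variable bindings to a representative term."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution unifying a and b, or None on failure."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# The atom Eagle(harry) against the rule head ?y(?x): the variable ?y
# binds to the relation name Eagle -- no second-order machinery needed.
s = unify(("Eagle", "harry"), ("?y", "?x"))
print(s)  # {'?y': 'Eagle', '?x': 'harry'}
```

On conventional FO syntax, where the predicate slot is always a 
constant, this behaves exactly like the standard unifier.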

Also, by the way, there are other code options which allow the 
optimizations to apply more generally. For example, SNARK attaches 
breakout flags to relation names to trigger special-purpose 
reasoners, e.g. if the system figures out that R is transitive, it has 
special transitive-property code to handle that case, which is called 
by the unifier when it finds the 'transitive' flag on the name R. 
Having terms or variables denote properties will break this code on 
the extended syntax, because you can't attach flags to variables. But 
here's an option, which is better structured in any case: rather than 
flag the relation name, add a special assertion about the relation, 
like 'transitive(R)', using one of a small set of special 'meta' 
properties with code breakouts associated with them. Keep them in a 
hash-coded sub-KB if you like. Now, check each relational *term* 
during unification to see if there is a subsuming special assertion, 
and use that to trigger any special reasoner. It's a small (near- 
constant) extra cost, but it allows the optimizations to apply to any 
relational syntax, including things like relational variables or even 
functional terms denoting relations.
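
Here is a minimal sketch of that suggestion (mine, not SNARK's actual 
code; all names hypothetical): the 'transitive' breakout is keyed by a 
meta assertion looked up per relational term, so it fires even when 
the relation arrives via a variable binding rather than a flagged 
constant.

```python
# Meta assertions live in their own indexed sub-KB; the query code
# consults it for each relational term and dispatches to the
# special-purpose transitive reasoner when a subsuming assertion exists.

meta_kb = {("transitive", "ancestor")}   # special 'meta' assertions

facts = {("ancestor", "a", "b"), ("ancestor", "b", "c")}

def query(rel, x, y):
    """Does rel(x, y) hold, given the facts and meta assertions?"""
    if (rel, x, y) in facts:
        return True
    # Near-constant-time check for a subsuming meta assertion.
    if ("transitive", rel) in meta_kb:
        # Special-purpose transitive reasoner: breadth-first closure.
        seen, frontier = set(), [x]
        while frontier:
            u = frontier.pop()
            for (r, a, b) in facts:
                if r == rel and a == u and b not in seen:
                    if b == y:
                        return True
                    seen.add(b)
                    frontier.append(b)
    return False

print(query("ancestor", "a", "c"))  # True
```

Nothing here depends on "ancestor" being a constant: the same lookup 
works if rel was bound by unification to a variable or a functional term.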

I've had detailed discussions with several implementers about this and 
still haven't been convinced that there are any real problems with the 
extended syntax. It is particularly frustrating to see this constant 
reiteration of the idea that anything that goes a tiny step beyond a 
basic undergraduate textbook version of logic is a disastrous 
paradigm shift which will break all known reasoning engines, without 
the case ever actually being made. 
http://www.cs.man.ac.uk/~horrocks/Publications/download/2003/HorrocksPatelSchneider.pdf 
for example reiterates the 'breaks all optimizers' claim several 
times, without giving a shred of evidence or even analysis.

>So, I propose skipping the "well, I think the right way to
>model this is x" arguments, and move on to, "This is the price of allowing
>classes to be treated as instances in the language".

Yes, I'd like to see some actual details as well, to get us beyond 
the tiresome reiteration of the claim that this is all so 'non-standard'.

Pat
-- 
---------------------------------------------------------------------
IHMC					(850)434 8903 or (650)494 3973   home
40 South Alcaniz St.			(850)202 4416   office
Pensacola              			(850)202 4440   fax
FL 32501           				(850)291 0667    cell
phayes@ai.uwf.edu	          http://www.coginst.uwf.edu/~phayes
s.pam@ai.uwf.edu   for spam

Received on Thursday, 20 February 2003 17:07:37 UTC