
RE: Rules are *not* Horn clauses!

From: Pat Hayes <phayes@ai.uwf.edu>
Date: Wed, 19 Sep 2001 18:38:00 -0500
Message-Id: <p05101023b7ced2c1207f@[]>
To: "Wagner, G.R." <G.R.Wagner@tm.tue.nl>
Cc: www-rdf-rules@w3.org
>  > >Please notice that Prolog rules (as well as any other practical
>>  >computational rule concept such as SQL views and OCL implications)
>>  >are *not* Horn clauses! They are, for good reasons, more expressive,
>>  >allowing for negation in their body/antecedent (and other things,
>>  >such as quantifiers, as in the case of SQL views).
>>  Maybe Prolog has morphed into something else since the last time I
>>  looked at it. Can you characterize what subset of logic *is* captured
>>  by Prolog? I'm not sure what you mean by 'allowing for negation'.
>You probably know that Prolog, like SQL, applies a negation (in
>rule bodies only) under the closed-world assumption, often called
>negation-as-failure. A negation operator is essential for
>expressing queries that arise in practical applications. For
>being practically (and not just academically) relevant, rules
>for the SemanticWeb need negation!

OK, you mean negation by failure. I thought maybe you did. This is a 
controversial issue in the semantic web context. I actually strongly 
disagree: I think that negation by failure, like any other 
non-monotonic construction, is not a viable mechanism for the 
semantic web; far from being needed, I think that it will be like a 
poison or a cancer; its very presence will make the semantic web 
unusable.

Those are strong words, so let me be particularly clear. The 
mechanisms often characterized as negation-by-failure are useful and 
will continue to be useful; the fact that something is not found in 
some database is often very important and useful information. But 
that in itself is not *negation* by failure; it might be called 
absence-by-failure. The inference from (nothing in this KB matches A) 
to (A is false) is the non-monotonic step that I think must be 
avoided, or at any rate somehow controlled and isolated, in order to 
make the semantic web useable. For example, it may be necessary to 
understand all such 'negations' as containing an indication of what 
KB they are relative to, so that some person or process other than 
the reasoner that came to the negative conclusion can make an 
independent judgement about whether that conclusion should be 
considered acceptable in a wider context; or, it may be that this 
kind of inference is kosher provided one also has some kind of 
warranty about the KB in question (that it is in some sense 
'complete'), although such guarantees are notoriously difficult to 
justify in most realistic settings, and often very hard to state 
precisely.
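
To make the distinction concrete, here is a minimal Python sketch (all 
names are made up for illustration, not from any real system). Absence 
of an atom from a KB is a plain, monotone observation; reading that 
absence as falsity is the non-monotonic step:

```python
# Hypothetical illustration: a KB is just a set of ground facts.
kb = {"bird(tweety)", "bird(polly)"}

def absent(kb, atom):
    """Absence-by-failure: the atom is simply not found in THIS KB."""
    return atom not in kb

def naf_false(kb, atom):
    """Negation-as-failure: the non-monotonic step that reads
    'not found in kb' as 'false'."""
    return absent(kb, atom)

# Both report the same thing for the KB as it stands...
assert absent(kb, "penguin(tweety)")
assert naf_false(kb, "penguin(tweety)")

# ...but the NAF conclusion is not stable under growth of the KB:
# adding one fact silently retracts the 'negative' conclusion.
kb.add("penguin(tweety)")
assert not naf_false(kb, "penguin(tweety)")
```

Note that `absent` is honest on its own terms only because it carries 
the KB as an explicit parameter; a conclusion tagged with the KB it is 
relative to is exactly the kind of controlled negation suggested above.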

>The model-theoretic semantics for logic programming rules with
>negation (also called "normal logic programs") is given by
>a certain definition of intended models (called "stable models").
>There is an efficient inference operation, called "well-founded
>semantics" that is sound, but not complete, wrt stable models.

Yes, I know. All of these notions refer to a given set of clauses, 
however, relative to which the stability is understood. And the 
general case for the semantic web will always be one where the 
'global' set of knowledge is simply unknown, and in any case is 
liable to change as a result of circumstances beyond the reasoner's 
control, or even knowledge. Logic programming refers intrinsically to 
a closed world, and the web is an intrinsically open world. They are 
fundamentally unsuited to one another. Logic programming is an 
essentially private matter; its proofs are run-time tracings of a 
processor, and its results are bindings (ie models, in effect); but 
web reasoning has to be carried out, and its proofs communicated (or 
at any rate communicable when required) in  an essentially public 
manner, so that they can be independently checked and evaluated. 
Think of a semantic-web proof as something that can be published with 
a URL, and once published, can be checked for validity by any 
processor given only that URL. The KB might not even exist, but the 
proof must still be checkable.
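
The KB-relativity of stability can be made concrete. Below is a 
brute-force sketch (not an efficient algorithm; the program encoding is 
invented for illustration) of the stable-model test for a normal 
program via the Gelfond-Lifschitz reduct:

```python
from itertools import chain, combinations

# A normal program: list of (head, positive_body, negative_body).
# Example: p :- not q.   q :- not p.
program = [("p", [], ["q"]), ("q", [], ["p"])]
atoms = {"p", "q"}

def reduct(program, model):
    """Gelfond-Lifschitz reduct: drop rules whose negative body meets
    the candidate model; delete negative literals from the rest."""
    return [(h, pos) for (h, pos, neg) in program
            if not (set(neg) & model)]

def minimal_model(definite):
    """Least model of a definite (negation-free) program by fixpoint."""
    m, changed = set(), True
    while changed:
        changed = False
        for h, pos in definite:
            if set(pos) <= m and h not in m:
                m.add(h)
                changed = True
    return m

def stable_models(program, atoms):
    """A candidate is stable iff it is the least model of its reduct."""
    subsets = chain.from_iterable(
        combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
    return [set(s) for s in subsets
            if minimal_model(reduct(program, set(s))) == set(s)]

print([sorted(m) for m in stable_models(program, atoms)])  # [['p'], ['q']]
```

Notice that `stable_models` quantifies over the *whole* program: add or 
remove one clause and the set of stable models can change arbitrarily, 
which is just the closed-world dependence in executable form.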

>  > Ah, yes; SQL isnt what I would call a logical query language at all.
>But one may view SQL queries (without aggregates) as FOL formulas.
>E.g., "SELECT x,y,z FROM p WHERE x=y" corresponds to the formula
>"p(x,x,z)" which is answered by returning all bindings to x and z
>such that the corresponding instantiations are entailed by the
>current database state.

Thanks, that is a view of SQL with which I was not familiar. Is there 
any kind of guarantee that an SQL processor will be able to 
perform all the inferences that this translation would suggest? (I 
ought to stop saying things about SQL in public until I do more 
reading, obviously.)
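
Wagner's reading can be tried out directly. Here is a minimal sqlite3 
sketch (the relation `p` and its contents are invented for 
illustration) of the correspondence between the SELECT and the open 
formula p(x, x, z):

```python
import sqlite3

# A hypothetical three-column relation p(a, b, c), standing in for
# the predicate p in the FOL reading.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE p (a, b, c)")
con.executemany("INSERT INTO p VALUES (?, ?, ?)",
                [(1, 1, 2), (1, 2, 3), (4, 4, 5)])

# "SELECT a, b, c FROM p WHERE a = b" read as the open formula
# p(x, x, z): the answers are exactly the bindings of x and z whose
# instantiations are true in the current database state.
rows = con.execute("SELECT a, b, c FROM p WHERE a = b").fetchall()
print(rows)  # [(1, 1, 2), (4, 4, 5)]
```

The "entailed by the current database state" clause is where the 
closed world re-enters: the query answers are complete only relative 
to this one database instance.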

>  > >We should consider the concept of a rule as a basic one that
>>  >is not derived from that of a Horn clause (which happens to be
>>  >a very restricted/simplified type of rule). The concept of a
>>  >Horn clause is tied to classical (two-valued) logic. But rules
>>  >play a role in all kinds of formalism with a non-classical
>>  >logic semantics (such as intuitionistic, partial, temporal,
>>  >inconsistency-tolerant logic, etc.).
>>  Hold on a minute there. There is no single notion of 'rule' that
>>  plays a role in a variety of logics.
>Why not?

Because they have different syntax, for a start. A rule that applies 
to, say, temporal logic will contain expressions that aren't even 
well-formed to a parser for, say, classical nontemporal logic.

>For each logic (even if it does not have an implication)
>we can define the notion of a sequent F => G, where both F and G
>may be composite and have free variables, and using the logic's
>model theory, we can define when such a sequent is satisfied by
>a model. This seems to provide a generic concept of a rule.

Well, maybe (modulo the changes in syntax already noted); but if a 
rule is like a sequent, then it cannot also be like a clause. I agree 
that sequents are a more powerful and more general tool. But notice 
now that these 'rules' have no preferred direction of processing, and 
support complete symmetry between 'assertions' and 'queries' (ie 
sequents with empty left and right sides), so the use of 'rule' 
terminology seems only marginally appropriate. Sequent logic 
typically operates in what might be called 'sideways' mode rather than 
top-down or bottom-up, where inference rules (sorry) manipulate both 
antecedent and consequents together. In any case, we have a perfectly 
good, precise terminology already for sequents, so why introduce an 
imprecise, confused one to replace it?
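
For the record, the generic sequent reading is easy to state 
executably in the propositional case (a deliberately small sketch; 
models are sets of true atoms, and free variables are left out):

```python
from itertools import combinations

def satisfies_sequent(model, F, G):
    """A model satisfies the sequent F => G iff, whenever it
    satisfies F, it also satisfies G."""
    return (not F(model)) or G(model)

atoms = ["p", "q"]
models = [set(c) for r in range(3) for c in combinations(atoms, r)]

# p => p or q : satisfied in every model (a valid sequent).
F = lambda m: "p" in m
G = lambda m: "p" in m or "q" in m
assert all(satisfies_sequent(m, F, G) for m in models)

# p or q => p : fails in the model {q}, so not valid.
F2 = lambda m: "p" in m or "q" in m
G2 = lambda m: "p" in m
assert not satisfies_sequent({"q"}, F2, G2)
```

Nothing in `satisfies_sequent` prefers one direction of processing 
over the other, which is exactly the symmetry complained of above.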

>  > BTW, the notion of Horn clause is not restricted to classical
>>  two-valued logic. It applies to any logic which has conjunction,
>>  disjunction and negation, which includes all those you mention here
>>  and many others.
>I don't understand your remark. Isn't it essential for the
>concept of a Horn clause that -p v q is equivalent to p -> q?

Not for the concept, no. For clauses (not just Horn clauses) to be a 
universal normal form requires reductions like this, yes.
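
In the classical two-valued case the equivalence Wagner mentions is a 
four-line finite check (written out explicitly to avoid circularity):

```python
from itertools import product

# Truth table for the material conditional p -> q, written out
# explicitly rather than defined via (not p) or q.
COND = {(False, False): True, (False, True): True,
        (True, False): False, (True, True): True}

# Classically, -p v q coincides with p -> q on every assignment:
for p, q in product([False, True], repeat=2):
    assert ((not p) or q) == COND[(p, q)]
```

No such finite truth-table argument is available intuitionistically, 
which is the point at issue in the next exchange.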

>This equivalence does not hold, for instance, in intuitionistic
>or partial logic.

Not in intuitionistic, but there are clause-like normal forms for 
partial logics, I believe, though they are more complicated than in 
the classical case. But sequent rules for partial and intuitionistic 
logics are different from the analogous rules for classical logics, 
as well.  Different logics are just, well, different.


IHMC					(850)434 8903   home
40 South Alcaniz St.			(850)202 4416   office
Pensacola,  FL 32501			(850)202 4440   fax
Received on Wednesday, 19 September 2001 19:38:24 UTC
