Re: [RIF] Reaction to the proposal by Boley, Kifer et al

Hello,

This is regarding dealing with equality in Herbrand models. Here are
some facts that may be known to many but not all among this WG.

As Michael mentioned, the simplest and most natural way to model
equality in algebraic models is indeed using congruences. Recall that a
first-order Equational Theory (ET) is a conjunction of equations, each
being a pair of functional first-order terms - which may or may not
contain variables - each pair defining its own (universally quantified)
variable scope. An ET engenders a congruence relation on the term
algebra (the set of Herbrand - i.e., ground - terms).  Quotient algebras
are formed modulo such a congruence. Unification modulo such a theory is
one way to operationalize specific ETs for which an efficient method is
known (e.g., AC, ACI, Distributive, etc., ... - see the archives of the
UNIF conference for the last 25 years). Theory unification + Resolution
(e.g.) gives a means to have a clean algebraic semantics for several
collection types such as sets (ACI), multisets - or bags (AC), or
sequences (empty theory). Another substantial advantage of unification
modulo is that it is more efficient than using equational rewriting and
even converges where rewriting would diverge - e.g., for commutativity.
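To make computing modulo such a theory concrete, here is a tiny Python sketch (my own illustration - the term encoding and function names are mine, not from any system mentioned here) that decides equality of ground terms modulo an ACI symbol - the theory of sets - by computing a canonical representative of each congruence class:

```python
# Terms are nested tuples like ("union", ("union", "a", "b"), "c"), with
# string leaves. For an ACI symbol (associative, commutative, idempotent),
# a canonical representative of each congruence class is obtained by
# flattening nested applications, deduplicating, and sorting the arguments.

def flatten(sym, term):
    """Collect the arguments of nested applications of an AC symbol."""
    if isinstance(term, tuple) and term[0] == sym:
        args = []
        for t in term[1:]:
            args.extend(flatten(sym, t))
        return args
    return [term]

def aci_canon(sym, term):
    """Canonical form modulo ACI: flatten, dedupe (idempotence), sort."""
    if isinstance(term, tuple) and term[0] == sym:
        args = sorted({aci_canon(sym, t) for t in flatten(sym, term)},
                      key=repr)
        return (sym,) + tuple(args)
    return term

def aci_equal(sym, t1, t2):
    """Semantic equality of two ground terms modulo the ACI theory of sym."""
    return aci_canon(sym, t1) == aci_canon(sym, t2)
```

The same flatten-and-sort trick without the deduplication step handles AC (bags); dropping both and comparing terms syntactically corresponds to the empty theory (sequences).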

Anyway, all this takes its roots in the work of Knuth and Bendix (KB)
(1970). In their seminal paper, they address the problem of computing
modulo an equational theory. Basically, each element in the quotient
Herbrand model is a set of Herbrand terms all congruent modulo the
equations. The best way to deal operationally with a congruence class is
to find a way to compute a unique canonical representative of the
class. KB proposed using equations as rewrite rules. Namely, given a
Herbrand (i.e., ground) term and an equational theory, this term may be
reduced to normal (or canonical) form by repeatedly rewriting it:
any part of it that matches the LHS of an equation with substitution
\sigma is replaced by the corresponding instance RHS\sigma. To do so
without going around in circles (think of commutativity), one needs to
"orient" the equations (i.e., turn "t_1 = t_2" into either "t_1 -> t_2"
or "t_2 -> t_1"). If this can be done in such a way that simplifying
from left to right follows a
well-founded ordering on the Herbrand terms for this signature, and if
such an ordering is at hand, then the famous Knuth-Bendix completion
algorithm is a semi-decision procedure to compute a complete and
confluent set of rewrite rules for an equational theory. Note that the
KB algorithm is parameterized on the term ordering that orients the
equations.  In their original paper, KB themselves proposed a clever
term-weighting ordering, where the weights of terms were computed from
weights on function symbols, chosen so as to give lesser weight to the
"simpler" side of an equation whenever possible. Better term
orderings were proposed later, most notably the Recursive Path Ordering
based on multiset orderings (Nachum Dershowitz) and later extended and
improved into the Recursive Decomposition Ordering and Path of Subterms
Ordering by the Nancy Equation and Rewrite Rule School (Jean-Pierre
Jouannaud, Claude and Hélène Kirchner, Pierre Lescanne, Michaël
Rusinowitch, etc.). Again, look up conferences such as RTA, UNIF, CADE,
LPAR, ... over the last 25 years.

The KB completion method was
later extended to using unification modulo theory for theories that have
such an algorithm (ACI, etc...), and even further to using general
constraint-solving rather than just Herbrand-term unification in
computing so-called critical pairs (the heart of the KB procedure,
which consists of finding two potentially conflicting rules by
successfully superposing a subterm of one rule's LHS onto the other
rule's LHS using unification).  Indeed, Herbrand-term matching and
unification are
but specific instances of constraint entailment and conjunction,
respectively. Since the critical pairs computed by the KB method via
unification denote the intersection of the sets of ground instances of
the unified terms, one could replace unification by any complete
constraint-solving method checking the consistency of the conjunction of
two constraints, since a solved form for the conjunction denotes the
intersection of the solutions of each.  And so was the KB completion
further extended using general constraint solving rather than just
computing Herbrand substitutions (which happen to be solved forms for
the particular constraint system of conjunction of equations among
Herbrand terms). These extensions were done by the Kirchners in the
1990's in Nancy, France. More recently, Claude Kirchner also extended
term-rewriting (using just Herbrand matching) into constraint-driven
rewriting (by replacing terms with constraints and matching with
constraint entailment) - the \rho-Calculus.
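As a concrete (if toy) illustration of rewriting to normal form with oriented rules - all encodings and names below are my own, not KB's - here is a Python sketch that matches a rule's LHS against a subterm and replaces it by the instantiated RHS until no rule applies, using Peano-numeral addition as the assumed example theory:

```python
# Terms are tuples ("f", arg1, ...) or strings; variables are strings
# starting with an uppercase letter.  Example theory: Peano addition,
# oriented as  plus(0, X) -> X  and  plus(s(X), Y) -> s(plus(X, Y)).
PEANO = [
    (("plus", "0", "X"), "X"),
    (("plus", ("s", "X"), "Y"), ("s", ("plus", "X", "Y"))),
]

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(pat, term, sigma):
    """One-way matching: extend sigma so that pat.sigma == term, or None."""
    if is_var(pat):
        if pat in sigma:
            return sigma if sigma[pat] == term else None
        return {**sigma, pat: term}
    if isinstance(pat, tuple) and isinstance(term, tuple) \
            and len(pat) == len(term) and pat[0] == term[0]:
        for p, t in zip(pat[1:], term[1:]):
            sigma = match(p, t, sigma)
            if sigma is None:
                return None
        return sigma
    return sigma if pat == term else None

def subst(term, sigma):
    """Apply substitution sigma to a term."""
    if is_var(term):
        return sigma.get(term, term)
    if isinstance(term, tuple):
        return (term[0],) + tuple(subst(t, sigma) for t in term[1:])
    return term

def rewrite_step(rules, term):
    """One rewrite at the root if possible, else in a subterm; None if stuck."""
    for lhs, rhs in rules:
        sigma = match(lhs, term, {})
        if sigma is not None:
            return subst(rhs, sigma)
    if isinstance(term, tuple):
        for i in range(1, len(term)):
            new = rewrite_step(rules, term[i])
            if new is not None:
                return term[:i] + (new,) + term[i + 1:]
    return None

def normalize(rules, term):
    """Rewrite until no rule applies: the normal (canonical) form."""
    while True:
        new = rewrite_step(rules, term)
        if new is None:
            return term
        term = new
```

These two rules terminate (e.g., under a recursive path ordering with plus greater than s in the precedence) and are confluent, so every term has a unique normal form - here, 1+1 normalizes to 2 in Peano notation.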

[Incidentally, the KB method is also just an instance of a larger family
of decision procedures based on subterm superposition. E.g.,
Buchberger's algorithm is the analog of the KB method in Polynomial
Ideal Algebra.]

Back to reasoning with equality in Herbrand models ...

If one is given an equational theory for which a complete and confluent
set of rewrite rules is available (as produced by the KB method, for
example), one can then operationalize reasoning modulo such a quotient
Herbrand
model (i.e., dealing with syntactically different but semantically equal
terms). Thus, the most general method to do automatic reasoning in FOL
modulo an equational theory while still having a clean declarative
quotient model semantics, is Equational Narrowing ("Surréduction" in
French). Basically, Narrowing is Term-Rewriting using unification
instead of matching ("filtrage" in French). This has the effect of
allowing the inversion of functions, whereas Rewriting cannot. For
example, take
the one-equation theory:

	f(a) = b.

which gets oriented as:

	f(a) -> b.
	
With matching, this rule can only rewrite the term "f(a)" to "b".
Therefore, reasoning using rewriting solves the equation "f(a) =?= b".
However, since Rewriting uses only matching, it cannot solve more
general equations with variables in argument positions, such as
"f(Y) =?= b", since no term in this equation matches the rule's LHS.
Narrowing, however, uses unification and can solve any such equation
(provided the set of rules is complete and confluent and the subterm
selection strategy - e.g., innermost/outermost - is complete). Indeed,
Narrowing, thanks to
unification, allows "guessing" the argument (thus effectively inverting
the function), whereas matching does not. Because of this power, similar
to that of the resolution-based model of computation exhibited by
Prolog, and because of its pure model semantics, Narrowing has been
adopted as the computational method of choice by a growing family of
Functional Logic Programming
languages (see, to name just one, the Curry language and the work of
Michael Hanus).
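Here is a minimal Python sketch of that one-rule example (my own encoding; a real narrowing procedure would also rename rule variables apart and narrow at arbitrary subterm positions). Unification, rather than matching, lets the rule f(a) -> b bind the goal variable:

```python
# The one-rule theory from the example above: f(a) -> b.
# Terms are tuples/strings; variables start with an uppercase letter.
RULES = [(("f", "a"), "b")]

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, sigma):
    # Follow variable bindings recorded in sigma.
    while is_var(t) and t in sigma:
        t = sigma[t]
    return t

def unify(t1, t2, sigma):
    """Two-way unification (no occurs-check, for brevity)."""
    t1, t2 = walk(t1, sigma), walk(t2, sigma)
    if t1 == t2:
        return sigma
    if is_var(t1):
        return {**sigma, t1: t2}
    if is_var(t2):
        return {**sigma, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) \
            and len(t1) == len(t2) and t1[0] == t2[0]:
        for a, b in zip(t1[1:], t2[1:]):
            sigma = unify(a, b, sigma)
            if sigma is None:
                return None
        return sigma
    return None

def narrow_solve(rules, goal, result):
    """Solve 'goal =?= result' by one narrowing step at the root:
    unify goal with a rule's LHS (where matching would fail), then
    require the rule's RHS to unify with the expected result."""
    answers = []
    for lhs, rhs in rules:
        sigma = unify(goal, lhs, {})
        if sigma is not None:
            sigma = unify(rhs, result, sigma)
            if sigma is not None:
                answers.append(sigma)
    return answers
```

Matching the pattern f(a) against the term f(Y) fails (the variable is on the wrong side), which is exactly why rewriting alone cannot invert f; unifying them succeeds with Y bound to a.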

Another method for dealing with ETs avoids Narrowing, which may be
costly (as in Prolog, to be complete, general Narrowing is a
non-deterministic procedure, since equations may have several solutions
- e.g., when inverting a non-injective function). If one wishes to
retain determinacy of function reduction while allowing Herbrand
unification of
terms containing both constructors and interpreted function symbols,
where such interpreted symbols are defined by rewrite rules, then one
can use "residuation". It consists of delaying the solving of an
equation until the interpreted function has all its arguments
instantiated enough to match (rather than unify with) a rule's LHS.
This method is used by the
Functional Logic Programming language LIFE that I have designed and
developed with colleagues over several years (LIFE stands for Logic,
Inheritance, Functions, Equations). Using residuation vs. narrowing has
the advantage of keeping functional reduction determinate at the expense
of being an incomplete equation solver (since it does not invert
functions). However, it allows more declarative logic programs: since
the order of unification is unimportant, one can change prohibitively
inefficient generate-and-test logic programs into surprisingly efficient
test-and-generate programs simply by inverting the order of constraints
and value labeling - the equations delayed by residuation in the test
phase (still missing their value labels) act as powerful search-space
pruners. So we prefer LP with equality using residuation rather than
narrowing because it is more declarative than Prolog (which cannot deal
with syntactically different but semantically equal terms) and more
efficient to boot! [In other words, as my late father used to say, "I'd
rather be wealthy and healthy than poor and sickly! ..." :-)]
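For what it's worth, here is a tiny Python sketch of the residuation idea (all names and machinery below are my own simplification, not LIFE's actual implementation): an equation over an interpreted function suspends until its argument is bound, then fires as a test that can prune the current branch:

```python
# A delayed equation  fun(var) = expected  residuates until var is
# instantiated, then fires.  'double' is an assumed example function,
# defined here directly rather than by rewrite rules.
FUNS = {"double": lambda n: 2 * n}

class Store:
    def __init__(self):
        self.sigma = {}     # variable bindings
        self.residue = []   # delayed equations: (fun, var, expected)

    def delay(self, fun, var, expected):
        """Post an equation; it residuates until var is instantiated."""
        self.residue.append((fun, var, expected))

    def bind(self, var, value):
        """Label a variable and wake any equations it was blocking.
        Returns False if a woken equation fails (prune this branch)."""
        self.sigma[var] = value
        ok, still = True, []
        for fun, v, expected in self.residue:
            if v in self.sigma:   # argument now ground: fire the test
                ok = ok and FUNS[fun](self.sigma[v]) == expected
            else:
                still.append((fun, v, expected))
        self.residue = still
        return ok
```

Posting the delayed test before labeling is exactly the test-and-generate inversion described above: each candidate binding is checked the moment it is made, instead of after a full candidate solution has been generated.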

Thank you for your attention.

-hak


PS/ I did not put in any links or give any specific references, but used
     enough buzzwords and cited enough names for anyone to search on
     Google for more information on any notion or person mentioned in
     this message... :-)


-- 
Hassan Aït-Kaci
ILOG, Inc. - Product Division R&D
tel/fax: +1 (604) 930-5603 - email: hak @ ilog . com

Received on Friday, 5 May 2006 23:42:32 UTC