Re: plain rules, please [was: Semantic Web Rule Language (SWRL) 0.5 released]

   [Graham Klyne]
   Can you please point me at a resource that explains the precise distinction 
   between "deduction" and other forms of inference?

Consulting my ancient undergraduate logic textbook (by Angelo Margaris,
published 1967), under "deduction" in the index we find a definition
of "a" deduction, namely, a sequence of formulas, each of which is
either an axiom or results from applying an inference rule to
previous formulas.  Then one could say that "deduction" (the
technique) is whatever comes at the end of a "deduction" (the
sequence of formulas).  But that's not terribly enlightening.
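
To make that concrete, here is a toy deduction-checker in Python (my
notation, not Margaris's; the only inference rule is modus ponens, and
an implication A -> B is represented as the tuple ('->', A, B)):

    # A deduction: a sequence of formulas, each either an axiom or
    # obtained from earlier formulas by an inference rule.
    def is_deduction(lines, axioms):
        for i, formula in enumerate(lines):
            if formula in axioms:
                continue
            earlier = lines[:i]
            # Modus ponens: an earlier A and an earlier ('->', A, formula).
            if any(('->', a, formula) in earlier for a in earlier):
                continue
            return False
        return True

    axioms = {'p', ('->', 'p', 'q')}
    print(is_deduction(['p', ('->', 'p', 'q'), 'q'], axioms))  # True
    print(is_deduction(['q'], axioms))                         # False

Whatever stands at the end of an accepted sequence is what gets
called "deduced".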

A better definition comes by taking into account the semantics of
logical languages (found in another chapter).  Anything that can be
deduced is true in all models of a theory (and, if the logic is
complete, vice versa).  This is the reason that deduction is
conservative: if you can think of any interpretation of the given
facts, no matter how wild, in which the statements you start with are
true but P is false, then P cannot be deduced.  (Unless the
statements you start with are inconsistent, in which case there _are_
no interpretations that make them all true, and everything can be
deduced.)
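
In propositional logic you can check that conservativeness by brute
force: enumerate every truth assignment, keep the ones that satisfy
the statements you start with, and ask whether P holds in all of
them.  A sketch, with made-up atoms:

    from itertools import product

    # Semantic entailment: the conclusion holds in every model of the
    # premises.  Formulas are functions from assignments to bool.
    def entailed(premises, conclusion, atoms):
        models = [dict(zip(atoms, vals))
                  for vals in product([True, False], repeat=len(atoms))]
        models = [m for m in models if all(p(m) for p in premises)]
        # If the premises are inconsistent, models is empty and all()
        # is vacuously True: everything follows.
        return all(conclusion(m) for m in models)

    atoms = ['rain', 'wet', 'cold']
    premises = [lambda m: m['rain'],                       # rain
                lambda m: (not m['rain']) or m['wet']]     # rain -> wet
    print(entailed(premises, lambda m: m['wet'], atoms))   # True
    print(entailed(premises, lambda m: m['cold'], atoms))  # False: some model,
                                                           # however wild, makes
                                                           # cold false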

When one philosopher says "P is possible," and the other retorts that
it's "only logically possible," it's exactly this sense of possibility
they have in mind.  Those who expect great things from deduction hope
to make many commonsense inferences logically necessary by supplying
the appropriate axioms.  For instance, we'd like to infer that you
know your name.  It may be physically impossible, or incredibly
unlikely, that you have forgotten your name, but it's not logically
impossible unless we supply an axiom that says "Everybody knows their
own name."  Then we think of the possibility of Alzheimer's, and
realize that this is trickier than we thought.
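
The same brute-force trick shows what the axiom buys us, if we
flatten the quantifier into a single propositional atom (a crude
rendering, and obviously no help with the Alzheimer's problem):

    from itertools import product

    # Models are (person, knows_name) pairs; premise: person is true.
    models = [m for m in product([True, False], repeat=2) if m[0]]

    # Without the axiom, a model where knows_name is false survives:
    print(all(knows for (_, knows) in models))              # False

    # Add "everybody knows their own name": person -> knows_name.
    models = [(p, k) for (p, k) in models if (not p) or k]
    print(all(knows for (_, knows) in models))              # True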

Techniques like probabilistic reasoning with Bayes nets can be thought
of as deductive or nondeductive, and it is easy to slip from one mode
to the other without realizing it.  Let's assume that there is a
deductive theory in which a Bayes net and its boundary conditions can
be described, and the conclusions you arrive at are precisely those
licensed by the usual algorithms.  (Actually expressing this theory is
probably harder than you think, but let that pass.)  Now we will have
a theorem such as P("Klyne knows his name", 0.9999976), i.e., a
statement that the proposition "Klyne knows his name" has probability
0.9999976.  So far, deduction.  But if we slip to "Therefore, Klyne
knows his name," we have interpreted the conclusion nondeductively.
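
The slip is easy to see in code.  Take a two-node net (the
conditional tables below are invented, just as the 0.9999976 above
was): the arithmetic licensed by the probability axioms is the
deduction, and the final thresholding line is the slip.

    # Two-node Bayes net: alzheimers -> knows_name (invented numbers).
    p_alz          = 0.0001
    p_knows_if_alz = 0.60
    p_knows_if_not = 0.999999

    # Deduction: a theorem of the form P("Klyne knows his name", x).
    p_knows = p_knows_if_alz * p_alz + p_knows_if_not * (1 - p_alz)
    print(p_knows)           # ~0.99996

    # The nondeductive slip: detaching the conclusion itself.
    print(p_knows > 0.99)    # "Therefore, Klyne knows his name"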

Decision theorists can postpone the inevitable one step further by
having all _behavior_ depend only on expected utilities rather than
beliefs.  I don't need to actually _believe_ that Klyne knows his
name; I just have to realize that if I want to answer the question
"Does Klyne have a middle name?" the action with the highest expected
utility is to send him an e-mail message with the question.  One
problem is that to prove that an action has the highest expected
utility I have to be able to reason about all possible actions, not by 
running through an explicit list, but somehow.  Another problem is
that it is much more efficient to reason in terms of possibly wrong
beliefs than in terms of certain probabilities.  In the present
example, I'd like to believe that after asking Klyne the question and
getting the answer I will then know whether he has a middle name.  But
all I can conclude is that the conditional probability of "Klyne has a
middle name" given that he replies "No" is 0.001495.  (It's much
higher than you'd expect because of the chance that he may conceal the
truth, not out of malice, but in order to spoil the example.)
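
Where does a number like 0.001495 come from?  Bayes' rule, applied to
some model of Klyne's answering behavior.  The inputs below are
hypothetical (mine, not the ones behind the figure above), but they
show the mechanism:

    # P(middle name | replies "No"), with invented inputs.
    p_middle       = 0.3      # prior: Klyne has a middle name
    p_no_if_middle = 0.0035   # he conceals the truth to spoil the example
    p_no_if_none   = 0.999    # he answers "No" truthfully

    num = p_no_if_middle * p_middle
    den = num + p_no_if_none * (1 - p_middle)
    print(num / den)          # ~0.0015: small, but not zero, so strictly
                              # speaking I still don't *know* the answer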

                                             -- Drew


P.S. One might object that I can't really be certain about the
probabilities, not to very many significant digits.  No, but you'll
almost certainly never be contradicted if you act as though these
numbers really are completely accurate.


-- 
                                             -- Drew McDermott
                                                Yale University CS Dept.
