- From: pat hayes <phayes@ihmc.us>
- Date: Thu, 18 Dec 2003 12:19:30 -0800
- To: Drew McDermott <drew.mcdermott@yale.edu>
- Cc: www-rdf-rules@w3.org
- Message-Id: <p06001f16bc07a98efa53@[192.168.1.11]>
(Sorry about the delay in responding.)
I think we are talking at cross purposes and in fact agree on almost
everything except rhetoric.
----------
[me]
>The NAF approach is likely to be much more efficient, much easier to
>implement, and much more likely to yield a useful conclusion than the
>heavy-duty theorem prover.
[Pat Hayes]
All true. It is also likely to be wrong,
unfortunately. The fact that you can't think of a
closer airport doesn't usually qualify as a good
reason to conclude that there isn't one, unless
you also know for sure that you know all the
airport locations, so that if you don't know it,
then it's not there. Like, for example, if you
have a list of all the airports. If you make
this explicit, as you should, then you are back
doing 'heavy-duty' reasoning.
I was trying to stay within the vocabulary of the example, and I was
assuming a plausible context that I didn't state, namely that someone
was planning a trip.
----
And you thereby illustrate my point. When things are published on the
SWeb, they immediately LOSE their assumed context. Rules written
assuming a particular context tend to fail, potentially disastrously,
when used out of that context. NAF is a particularly acute example of
this, which fails totally under even a slight change in context.
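To make the failure mode concrete, here is a toy sketch (Python, with
invented airport names and distances, standing in for whatever rule
engine you like):

    # Negation-as-failure over a list of known airports: sound only
    # while the list really is complete for the context of use.
    known_airports = {"PNS": 5.0, "VPS": 60.0}   # miles from "here"

    def no_airport_within(radius, airports):
        # NAF: from failure to find one in the list, conclude none exists.
        return all(dist > radius for dist in airports.values())

    print(no_airport_within(10.0, known_airports))  # False: PNS is 5 miles off
    print(no_airport_within(2.0, known_airports))   # True -- fine at home,
    # but reuse the rule where the list is incomplete and it will
    # cheerfully 'prove' that an unlisted airport does not exist.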
----
If you replace "nearest airport" by "nearest
airport reasonable to travel someplace from here," then negation as
failure is a reasonable strategy, assuming you know all the airports
in the vicinity.
----
Of course it is a reasonable strategy to USE, particularly if you make
that assumption explicit. In fact if you make it explicit enough,
then NAF ceases to be 'non-deductive' (or whatever other silly label
you want to attach to it) and becomes a perfectly valid monotonic
inference. But in any case I'm not arguing that NAF should not be
used, when the user knows what they are doing, as a fast heuristic
method (or even non-heuristic, if used properly). I am saying that
as a general inferential strategy it is a very, very bad idea to rely
on it, particularly applied outside its original context.
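Here, as a toy sketch of what 'explicit enough' might look like (the
names and the mechanism are invented, not any real SW notation):

    # The same negative conclusion, with the closure assumption stated
    # as a premise instead of being buried in the proof procedure.
    facts = {("airport", "PNS"), ("airport", "VPS")}
    closed = {"airport"}        # premise: the airport facts are complete

    def provably_not(pred, arg):
        if (pred, arg) in facts:
            return False
        if pred in closed:
            return True         # entailed by facts plus the closure premise
        raise ValueError(
            "no closure premise for %r: failure proves nothing" % pred)

    print(provably_not("airport", "Roswell"))   # True, as a valid deduction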
There is a fascinating literature in applied psychology on efficient
inference strategies which rely on 'lack of knowledge' to draw
conclusions, and which in fact generalize NAF. (Gigerenzer calls it
'the recognition heuristic'.) So yes, of course this kind of reasoning is
useful, because efficient. But saying it is USEFUL is not to say that
it should be assumed as a semantic basis for information exchange on
the semantic web.
This confuses two issues: strategies for useful reasoning are one
thing, justifications of conclusions are another. We need both, but
we need to keep their roles clearly distinguished. To point out that
NAF is not a good foundation for truth-justification in general is
not to say that all SW reasoning must be done by clunky
general-purpose inference engines.
You know, Drew, it is slightly irresponsible of you to be airing
these old debates in such a forum at this stage in history, IMO. We
have had this battle in AI/KR, and surely we have done it to death
and now all understand these matters reasonably well. If we re-open
the procedural/assertional debate now, particularly using the old
question-begging terminology of mutual recrimination (neat/scruffy,
proceduralist/logicist, etc.) we will NEVER get any useful work done.
-----
BTW, calling it 'heavy-duty' is misleading. In
the first case you have made all the equality
reasoning explicit. In a prolog-style
implementation this is all buried in the
backtracking done by the interpreter: but it
still needs to be done. The same actual
*reasoning* is involved in both cases.
Yes. But the NAF version is stylized in a way that permits efficient
implementation.
----
Permits?? Are you implying that the use of a monotonic logic somehow
*forbids* efficient implementation??
There is a deep-seated fallacy surfacing here, to the effect that the
use of logic (or indeed anything else, but it seems to be usually
invoked by the use of the L-word) as a representational language
*requires* that a certain kind of mechanism be used to process it. If
you use logic for KR, you are *obligated* to use a general-purpose
complete logical reasoning engine, for example: or if you say 'equal'
then you must use a GPCLRE which draws conclusions using
paramodulation, or whatever. This is nonsense. You can 'do' equality
reasoning by iterating along a list if you like. It will be
incomplete, of course, but most efficient reasoners are incomplete:
so what? Nobody is saying that the use of a logical KR language
requires all reasoners to be complete. It is logically sound to just
not draw any conclusions at all, for one thing.
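A minimal sketch of what I mean, with invented URIs:

    # Equality 'done' by iterating along a list of known sameAs pairs:
    # incomplete (no symmetry, no chaining) but perfectly sound.
    same_as = [("ex:PatH", "ex:PatHayes"), ("ex:TheAuthor", "ex:PatH")]

    def provably_equal(a, b):
        return a == b or (a, b) in same_as

    print(provably_equal("ex:PatH", "ex:PatHayes"))      # True
    print(provably_equal("ex:PatHayes", "ex:PatH"))      # False: missed, not wrong
    print(provably_equal("ex:TheAuthor", "ex:PatHayes")) # False: no chaining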
(This confusion between KR language and process is so ubiquitous that
it deserves a name: how about the "MIT-Yale fallacy"? Ah, but that
would be an unworthy suggestion, reminiscent of the bad old days when
people swore vengeance on the bodies of their rejected conference
submissions.)
-----
If you could be sure that the alternative always
involved iterating through a list and doing a set of equality
substitutions, you could probably find an equally efficient
implementation. (I've often wondered why no one has worked on this.)
In the general case, though, you have to have a system that does
general-purpose reasoning about equality, which can involve a lot of
search.
----
You do not HAVE to have any kind of system. At some level, all
reasoning about equality is "general-purpose": after all, equality is
a pretty generic kind of thing to reason about. What I think you mean
here, though, is not the kind of *reasoning*, but the kind of
reasoning *process* or *strategy* that must be used: and then what
you say is just flat wrong. A reasoner is not in any way OBLIGATED to
use a complete inference method to handle equality. In fact a
reasoner is not even obligated to use a valid or guaranteed correct
inference method. It might for example cut corners by assuming names
are unique. Its conclusions will not be valid, in general, but
nothing in the semantic specification of the language requires that
all reasoners only perform valid inferences. The spec only guarantees
that IF you conform then your conclusions will be as sound as your
assumptions.
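The corner-cutting move, as a one-line toy (invented names again):

    # Unique-names assumption: syntactically distinct names are treated
    # as denoting distinct things. Invalid in general, but the
    # conclusions are exactly as sound as that premise.
    def distinct_under_una(a, b):
        return a != b       # valid only given the unique-names premise

    print(distinct_under_una("ex:PatH", "ex:PatHayes"))
    # True under the assumption -- and false in fact, if both name me.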
----
> I hope the people who deprecate it realize
>that the heavy-duty theorem prover is the only alternative.
-----
I should have pointed out more forcibly how totally false this is,
like a Microsoft salesman saying that the only alternative to Word is
pencil and paper. And the use of 'heavy-duty' is a rhetorical
flourish that hardly bears close examination. Some of the
heaviest-duty software ever written spends its time doing database
NAF-style reasoning.
-----
It's not a matter of alternatives. If you want to
draw checkable valid conclusions, then you need
to do this kind of reasoning.
I don't want to draw checkable valid conclusions.
----
Fine: then do whatever you wish in your domain of application. But
when PUBLISHING your rules, I think it is not unreasonable to have a
global requirement (or at any rate a code of good practice) that
whatever you publish, you are responsible for saying what it means
clearly enough for others to use it. If you publish rules that only
work in an unstated context and which fail elsewhere, without any
indication that this is true, then you are acting at best
irresponsibly; and I would like the overall SW specs to say that you
are acting in a way that fails to conform and is deprecated.
----
If you want to
make random guesses and hope for the best then
you can of course work faster, but don't expect
others to believe in your conclusions.
At least I'll _have_ conclusions.
----
The rhetorical point being that any poor fool who trusts to a clunky
FO theorem-prover won't get any in a single lifetime, right? Drew,
where have YOU been?? Moore's law and about 20 years of dedicated
hacking of better unifiers, etc., have made even general-purpose
reasoners quite able to handle a lot of useful cases. Not nearly to
the scale obtainable with DLs or database technology, of course, but
still of some utility.
-----
Negation-as-failure is NOT a good general
reasoning strategy: 99.99% of the time it will
immediately produce childishly ludicrous
conclusions: I don't know anyone called Jose, so
there isn't anyone called Jose; I never heard of
SARS, ...
Where have you been?
Of course negation-as-failure is not the way to handle "not" in
general; it's the way to handle it when you don't care about possible
nearby secret airports and the like.
-----
and when you have some reason to suppose that airports that you care
about are known to you. Fine: so make this assumption explicit
somehow, and then NAF as an efficient inference *method* is freely
available for exchange, since *once that assumption is made
explicit*, NAF is monotonic (and hence a perfectly good form of
'syllogism', to use the ignorant terminology which started this
thread.)
The
industrial uses of Prolog-style rules all are
designed within controlled environments,
typically using databases, where such special
conditions can be assumed.
To repeat what I said above, if you use NAF as an efficient way to
draw valid conclusions, you're right. I prefer to think of it as a
way to draw conclusions that may well be wrong, in situations where
the wrongness of a probably correct conclusion is not fatal. The
burden is on someone who finds this distasteful to show that pure
deductive techniques will suffice for real-world applications.
-----
No, it has got nothing to do with showing anything about techniques.
People should, and will, use whatever techniques they find useful,
and good luck to them. None of the SW specs (RDF, RDFS, OWL) say
anything about what techniques can or must be used to process these
languages (except for owl:imports).
The burden is to show how conclusions generated in this way can be
published without misleading someone who is unaware of the context
in which they were derived, and to provide for ways of publishing the
rules themselves so that their assumed preconditions of use can be
made clear. Several ways have been suggested, including having a
distinct 'failure-negation' operator. My own favorite is to have a
notation for saying explicitly that some ontology is a closed world as
far as a namespace is concerned, and then NAF is just plain valid when
applied properly, and NAF and logicism can coexist on the Web happily.
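Something like this toy mechanism, where the closed-world marker and
all the URIs are invented purely for illustration:

    # A namespace declared closed licenses NAF for names inside it,
    # and nowhere else.
    closed_namespaces = {"http://example.org/airports#"}
    facts = {"http://example.org/airports#PNS",
             "http://example.org/airports#VPS"}

    def provably_absent(uri):
        if not any(uri.startswith(ns) for ns in closed_namespaces):
            raise ValueError("open world here: failure proves nothing")
        return uri not in facts

    print(provably_absent("http://example.org/airports#Roswell"))  # True, validly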
Pat
--
---------------------------------------------------------------------
IHMC (850)434 8903 or (650)494 3973 home
40 South Alcaniz St. (850)202 4416 office
Pensacola (850)202 4440 fax
FL 32501 (850)291 0667 cell
phayes@ihmc.us http://www.ihmc.us/users/phayes