
Re: a simple question

From: Drew McDermott <drew.mcdermott@yale.edu>
Date: Mon, 8 Dec 2003 16:04:31 -0500 (EST)
Message-Id: <200312082104.hB8L4Vu08410@pantheon-po04.its.yale.edu>
To: www-rdf-rules@w3.org

   [Graham Klyne]
   Your above comment about an algorithm that is "widely used and endorsed by 
   several major truckers" suggests a possibility that such might be adopted 
   as part of the logical theory against which conclusions may be checked.  

I think this is what Dan Connolly was suggesting also: that the only
thing we can try to verify deductively is that the outputs we get 
actually do come from a known algorithm that is endorsed by some
authority, or has a verifiable track record, or the like.
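The idea of checking provenance rather than the reasoning itself can be
sketched as follows. This is a hypothetical illustration, not anything
proposed in the thread: the "endorsement" is modelled as an HMAC tag
issued with a key held by the endorsing authority, and all names and
values are invented for the example.

```python
# Hypothetical sketch: instead of re-deriving the conclusion, the
# checker verifies only that the output was produced (or blessed) by
# an endorsed algorithm, modelled here as an HMAC tag over the output.

import hmac
import hashlib

# Illustrative shared key of the endorsing authority (invented).
AUTHORITY_KEY = b"shared-secret-of-endorsing-authority"

def endorse(output: bytes) -> bytes:
    """The endorsed algorithm (or authority) tags its own output."""
    return hmac.new(AUTHORITY_KEY, output, hashlib.sha256).digest()

def check_provenance(output: bytes, tag: bytes) -> bool:
    """The consumer checks the tag, not the reasoning behind it."""
    return hmac.compare_digest(tag, endorse(output))

route = b"recommended route: A -> C -> D"
tag = endorse(route)
print(check_provenance(route, tag))            # genuine output
print(check_provenance(b"tampered route", tag))  # forged output
```

Note that, exactly as the paragraph above says, a successful check
establishes only where the answer came from, not that it is correct.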

This is all fine, but it doesn't really justify the conclusion itself.
It justifies the rationality of accepting the output of the program as
worth acting upon.


   >Maybe this is a good way to think about it: Many inferences are
   >justified by statements of the form, "Here's my conclusion and my
   >grounds for believing it; just try to refute it."  That is, checking
   >is not just a matter of verifying that each step is actually justified
   >by an inference rule.  It can also be a matter of trying to find a
   >better conclusion than the one offered.

   Hmmm... I don't think I follow.  I'm not sure if I'm stumbling on your "not 
   just a matter...", or on what constitutes a "better conclusion".

I'm not sure what I mean myself.  I think I was channelling the
philosopher John Pollock, who has attempted to develop a theory of
_defeasible_ reasoning, that is, reasoning whose conclusions can be
refuted by a better argument.  In practice, I don't think he's avoided
the usual quagmires of nonmonotonicity, but then I haven't really
followed his work in detail.  It just seems to me that there _ought_
to be a theoretical framework here for some notion of arguments being
defeated by better arguments.
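One minimal way to make "refuted by a better argument" precise is the
abstract-argumentation style of framework (in the spirit of Dung's
grounded semantics, not Pollock's actual formalism): arguments attack
one another, and an argument is accepted when every attacker is itself
defeated.  The following sketch is purely illustrative.

```python
# Minimal sketch of defeasible acceptance: an argument is accepted
# once all of its attackers have been defeated; accepting an argument
# defeats everything it attacks.  Iterate to a fixed point.

def grounded_extension(arguments, attacks):
    """Return the set of arguments accepted under grounded semantics.

    `attacks` is a set of (attacker, target) pairs.
    """
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:  # every attacker is already out
                accepted.add(a)
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

# "Just try to refute it": B attacks A, but C attacks B, so A survives.
args = {"A", "B", "C"}
atts = {("B", "A"), ("C", "B")}
print(sorted(grounded_extension(args, atts)))  # -> ['A', 'C']
```

The example shows the nonmonotonic flavor: A's status depends on
whether some better argument against its attacker turns up later.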

It's easy to find examples of this in domains like legal reasoning,
but they're so far from being formalizable (_pace_ those researchers who
work on "AI and Law") that they don't really give us a good grip on
the issues.  So how about this kind of example: Suppose party A
predicts the behavior of a physical system based on Model 1, and
party B predicts its behavior based on Model 2.  One model might be
more detailed than the other, one may make more assumptions, and so
forth.  The behavior predictions might be deductive, given the models,
but the arguments about which model is more appropriate might revolve
around the accuracy or believability of the information about the
state of the physical system.  These arguments might depend on
probabilistic considerations, and not be deductive at all.
Furthermore, a third party might find a model better than either of
the ones under contention.
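A toy version of the model-contention scenario can be written down
directly.  Everything here is invented for illustration: two parties'
models of a physical quantity, a shared set of observations, and a
simple (non-deductive) scoring rule that a third party could use to
argue one model is more appropriate.

```python
# Hypothetical sketch: party A and party B each predict a quantity as a
# function of time; the models are compared on observed data by mean
# squared error.  The argument over which model is "better" is
# statistical, not deductive, just as in the paragraph above.

def mean_squared_error(model, observations):
    """Average squared gap between predictions and observed values."""
    return sum((model(t) - y) ** 2 for t, y in observations) / len(observations)

model_1 = lambda t: 2.0 * t          # party A: simpler, fewer assumptions
model_2 = lambda t: 2.0 * t + 0.5    # party B: assumes a constant offset

observations = [(0, 0.1), (1, 2.2), (2, 3.9), (3, 6.1)]

scores = {name: mean_squared_error(m, observations)
          for name, m in [("Model 1", model_1), ("Model 2", model_2)]}
best = min(scores, key=scores.get)
print(scores)
print(best)  # -> Model 1
```

A third party could, of course, enter a model fit to the same data and
beat both, which is exactly the open-endedness the paragraph points at.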

                                             -- Drew McDermott
                                                Yale University CS Dept.
