- From: Bijan Parsia <bparsia@isr.umd.edu>
- Date: Fri, 19 Dec 2003 23:31:16 -0500
- To: Drew McDermott <drew.mcdermott@yale.edu>
- Cc: pat hayes <phayes@ihmc.us>, www-rdf-rules@w3.org
On Dec 19, 2003, at 9:08 PM, Drew McDermott wrote:

> [Pat Hayes]
> I think we are talking at cross purposes and in fact agree on almost
> everything except rhetoric.
>
> Probably. I certainly agree on much of what you say in your posting.
> However, ....

Including, grumble grumble, quoting style. Leading whitespace often gets mangled!

> ----------
>
> What follows are several quotes from Pat's posting to the effect that
> we can and should distinguish "techniques" from "justifications":
>
> This confuses two issues: strategies for useful reasoning are one
> thing, justifications of conclusions are another.
> ...
> There is a deep-seated fallacy surfacing here, to the effect that the
> use of logic (or indeed anything else, but it seems to be usually
> invoked by the use of the L-word) as a representational language
> *requires* that a certain kind of mechanism be used to process it. If

Hmm. Is that really what's surfacing? I mean, there's a fairly natural bias toward, if possible and practical, using sound & complete decision procedures. Or, at least, sound & complete reasoning mechanisms. But I think the If Possible and Practical codicil is firmly in place. Sure, lots of semweb reasoning will be done by random Perl and Python scripts (or fairly cheap prolog hacks). But there's some sort of difference between acknowledging that and wanting to privilege "certain kinds of processing mechanisms" for interoperability purposes. And interop is somewhat the name of the game, I'm pretty sure.

> ...
> No, it has got nothing to do with showing anything about techniques.
> People should, and will, use whatever techniques they find useful,
> and good luck to them. None of the SW specs (RDF, RDFS, OWL) say
> anything about what techniques can or must be used to process these
> languages (except for owl:imports).

Surely they do, at least by their silence. If you are going to use those specs as the basis of judgments about interoperability or correctness of inferences, then it's pretty clear that a variety of classes of inference procedures are going to be incorrect. And since they can be incorrect in different ways, they will have interoperability issues.

[snip]
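To make that concrete, here's a throwaway sketch in plain Python (made-up triples, invented ex: names, no real RDF toolkit) of two cheap scripts that cut different corners over the same data and come back with incompatible answers:

```python
# Toy illustration only: invented data, no actual RDF library.
# Two quick-and-dirty "reasoners" answer the same question over the
# same triples, each cutting a different corner, and they disagree.

triples = {
    ("ex:alice", "ex:hasChild", "ex:bob"),
    ("ex:alice", "ex:hasChild", "ex:robert"),
    # Nothing here says whether ex:bob and ex:robert are the same
    # individual or two different ones.
}

def children(data, parent):
    return {o for (s, p, o) in data if s == parent and p == "ex:hasChild"}

def una_script(data):
    """Cuts corners by assuming distinct names denote distinct things
    (unique names assumption): concludes alice has at least two children."""
    return len(children(data, "ex:alice")) >= 2

def naf_script(data):
    """Cuts corners with negation as failure: it cannot prove the two
    names denote different individuals, so it treats 'at least two
    children' as false."""
    kids = children(data, "ex:alice")
    provably_distinct = any(
        (a, "owl:differentFrom", b) in data
        for a in kids for b in kids if a != b
    )
    return provably_distinct and len(kids) >= 2

print("UNA script: alice has >= 2 children?", una_script(triples))  # True
print("NAF script: alice has >= 2 children?", naf_script(triples))  # False
# Under the open-world semantics of the specs the answer is simply not
# entailed either way, so each script errs in its own direction -- and
# they won't interoperate on data that turns on this distinction.
```

Neither script is doing anything exotic, but ship the conclusions of one to the other and you have exactly the sort of interop problem I mean.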
> We need both, but
> we need to keep their roles clearly distinguished. To point out that
> NAF is not a good foundation for truth-justification in general is
> not to say that all SW reasoning must be done by clunky
> general-purpose inference engines.
>
> It's this distinction between techniques and justifications that I
> want to deny.

I wish it were not so. It seems to me that if you two are disagreeing on this, then you are disagreeing on quite a bit ;) Including a bit which has some consequences for the task at hand, specifying and standardizing representation languages.

[snip nice paragraph that seems straight out of Critique of Pure Reason]

So, Drew, is there any evolution in your position between CoPR and that paragraph? In your experience?

> In the following fragment, I believe you overstated your case:
>
> In fact a reasoner is not even obligated to use a valid or guaranteed
> correct inference method. It might for example cut corners by assuming
> names are unique. Its conclusions will not be valid, in general, but
> nothing in the semantic specification of the language requires that
> all reasoners only perform valid inferences.
>
> If you really stand by this, then there really is no difference in our
> positions. The assumption set, in this case, will include an
> assumption that "the algorithm did not err on this occasion." How
> would one check that without reopening the original question?

[snip]

Er...isn't the difference that you think the assumption isn't checkable (in fact) whereas Pat thinks that it is?

Cheers,
Bijan Parsia.